Is there a way to optimise the loop in the SwiftUI code below?

I am trying to feed a group of images from an iPhone SwiftUI app into a TFLite model and get results. The shape the model accepts is (1, 640, 640, 3). After a bit of research I found code that maps the image data into that shape so it can be fed to the model: Perform Inference on Input Data – Firebase. With some minor changes I ended up with the code below:

func preprocessImage(image: CGImage) -> Data? {
    // Draw the image into an 8-bit-per-channel RGB context. With .noneSkipFirst,
    // each pixel is stored as 4 bytes: one unused byte followed by R, G, B.
    guard let context = CGContext(
      data: nil,
      width: image.width,
      height: image.height,
      bitsPerComponent: 8,
      bytesPerRow: image.width * 4,
      space: CGColorSpaceCreateDeviceRGB(),
      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
    ) else {
        return nil
    }

    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    guard let imageData = context.data else { return nil }

    var inputData = Data()
    var count = 0
    // Assumes the image is already 640x640, matching the model's (1, 640, 640, 3) input shape.
    for row in 0..<640 {
        for col in 0..<640 {
            count += 1
            let offset = 4 * (row * context.width + col)
            // (Ignore offset 0, the unused alpha channel)
            let red = imageData.load(fromByteOffset: offset + 1, as: UInt8.self)
            let green = imageData.load(fromByteOffset: offset + 2, as: UInt8.self)
            let blue = imageData.load(fromByteOffset: offset + 3, as: UInt8.self)

            // Normalize channel values to [0.0, 1.0]
            var normalizedRed = Float32(red) / 255.0
            var normalizedGreen = Float32(green) / 255.0
            var normalizedBlue = Float32(blue) / 255.0

            // Append normalized values to the Data object in RGB order,
            // copying each Float32 into inputData byte by byte.
            let elementSize = MemoryLayout.size(ofValue: normalizedRed)
            var bytes = [UInt8](repeating: 0, count: elementSize)
            memcpy(&bytes, &normalizedRed, elementSize)
            inputData.append(&bytes, count: elementSize)
            memcpy(&bytes, &normalizedGreen, elementSize)
            inputData.append(&bytes, count: elementSize)
            memcpy(&bytes, &normalizedBlue, elementSize)
            inputData.append(&bytes, count: elementSize)
        }
    }
    print(count)  // debug: number of pixels processed (640 * 640 = 409,600)
    return inputData
}

But my concern is that the loop runs 409,600 times for each image, which I would really love to reduce. Is there any standard way, or any alternative, to achieve the same functionality with fewer loop iterations?
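To make the question concrete, the kind of single-pass conversion I am hoping exists would look roughly like the sketch below. It is untested and leans on Accelerate's vImage and vDSP; the function name preprocessImageVectorised is only for illustration, and it assumes the image is already 640x640 and that the vImage buffers carry no row padding.

import Accelerate
import CoreGraphics
import Foundation

func preprocessImageVectorised(image: CGImage) -> Data? {
    // Describe the interleaved 8-bit layout we want: one unused byte followed by R, G, B.
    guard let format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue)
    ) else { return nil }

    // Decode the CGImage into an XRGB8888 vImage buffer.
    guard var xrgbBuffer = try? vImage_Buffer(cgImage: image, format: format) else { return nil }
    defer { xrgbBuffer.free() }

    // Strip the unused channel in one vectorised pass: XRGB8888 -> RGB888.
    guard var rgbBuffer = try? vImage_Buffer(width: Int(xrgbBuffer.width),
                                             height: Int(xrgbBuffer.height),
                                             bitsPerPixel: 24) else { return nil }
    defer { rgbBuffer.free() }
    guard vImageConvert_ARGB8888toRGB888(&xrgbBuffer, &rgbBuffer, vImage_Flags(kvImageNoFlags)) == kvImageNoError else {
        return nil
    }

    // View the RGB bytes as one flat buffer (assumes rowBytes == width * 3, i.e. no padding).
    let valueCount = Int(rgbBuffer.width) * Int(rgbBuffer.height) * 3
    let bytePointer = rgbBuffer.data.bindMemory(to: UInt8.self, capacity: valueCount)
    let byteBuffer = UnsafeBufferPointer(start: bytePointer, count: valueCount)

    // Convert every byte to Float32 and scale to [0, 1] without a per-pixel Swift loop.
    let floats = vDSP.integerToFloatingPoint(byteBuffer, floatingPointType: Float.self)
    let normalised = vDSP.divide(floats, Float(255))

    return normalised.withUnsafeBufferPointer { Data(buffer: $0) }
}

If that direction is sound, preprocessImageVectorised(image:) would be a drop-in replacement for preprocessImage(image:) above.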

Thanks in advance.

  • You need to touch every offset in a two-dimensional array, so reducing the number of iterations is impossible unless you want to ignore some rows or columns. If not, a plain nested loop like yours is the normal approach.


  • @son But I feel like it’s an intensive task. I tried using try interpreter.resizeInput(at: 0, to: Tensor.Shape([1, 640, 640, 3])), but it doesn’t reshape the input (the call order I mean is sketched after these comments).


  • There is no SwiftUI here. Only Swift.

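For reference, the TFLite Swift API applies resizeInput only after the next allocateTensors() call, so the allocation has to sit between the resize and the data copy. Below is a trimmed, untested sketch of that call order; the model.tflite name and the runModel wrapper are placeholders.

import CoreGraphics
import Foundation
import TensorFlowLite

func runModel(on image: CGImage) throws -> Tensor {
    guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite"),
          let inputData = preprocessImage(image: image) else {
        throw NSError(domain: "Preprocess", code: -1)
    }

    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.resizeInput(at: 0, to: Tensor.Shape([1, 640, 640, 3]))
    try interpreter.allocateTensors()              // the resize takes effect here
    try interpreter.copy(inputData, toInputAt: 0)  // expects 640 * 640 * 3 Float32 values
    try interpreter.invoke()
    return try interpreter.output(at: 0)
}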
