
ios – How to write depth data (AVDepthData) to a photo file from an AVCapturePhoto object?


I’ve implemented capture similarly to Apple’s example of capturing depth data with the iPhone LiDAR camera. The main code snippets are as follows:

  1. Setting depth formats
let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back)!
let vidFormatsWithDepth = device.formats.filter { format in
    format.formatDescription.dimensions.width == 1920 &&
    format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
    !format.isVideoBinned &&
    !format.supportedDepthDataFormats.isEmpty &&
    format.supportedDepthDataFormats.contains { $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16 }
}

if let format = vidFormatsWithDepth.first {
    let depthFormats = format.supportedDepthDataFormats.filter { $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16 }
    try! device.lockForConfiguration()
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormats.last
    device.unlockForConfiguration()
}
  2. Photo output
func setUpPhotoOutput() {
    photoOutput = AVCapturePhotoOutput()
    photoOutput.maxPhotoQualityPrioritization = .quality
    self.captureSession.addOutput(photoOutput)
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
}
  3. Capturing the photo
var format: [String: Any] = [:]
if photoOutput.availablePhotoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    format[kCVPixelBufferPixelFormatTypeKey as String] = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
}
let settings = AVCapturePhotoSettings(format: format)
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
settings.isDepthDataFiltered = photoOutput.isDepthDataDeliveryEnabled
settings.embedsDepthDataInPhoto = photoOutput.isDepthDataDeliveryEnabled
photoOutput.capturePhoto(with: settings, delegate: self)
  4. Processing the captured photo data
func createPhotoFile(
    photo: AVCapturePhoto
) {
    let customizer = PhotoDataCustomizer()
    let mainImageData = photo.fileDataRepresentation(with: customizer)!
    // note: mainImageData should have embedded depth data, but...
    let imageSource = CGImageSourceCreateWithData(mainImageData as CFData, nil)!
    let depthDataDict = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        imageSource,
        0,
        kCGImageAuxiliaryDataTypeDepth
    )
    let disparityDataDict = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        imageSource,
        0,
        kCGImageAuxiliaryDataTypeDisparity
    )
    print("depthDataDict", depthDataDict ?? "nil")
    print("disparityDataDict", disparityDataDict ?? "nil")
    // ... both depthDataDict and disparityDataDict come out as nil
}

class PhotoDataCustomizer: NSObject, AVCapturePhotoFileDataRepresentationCustomizer {
    func replacementDepthData(for photo: AVCapturePhoto) -> AVDepthData? {
        let depthData = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
        return depthData
    }
}

AVCapturePhoto’s photo.depthData is present (not nil), and I’d expect it to be embedded since settings.embedsDepthDataInPhoto = true, but both variants of the depth data (kCGImageAuxiliaryDataTypeDepth, kCGImageAuxiliaryDataTypeDisparity) come out nil from CGImageSource.

How do I properly read the depth data from the photo file … or properly write the depth data in the first place?
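For what it’s worth, one workaround I’ve been experimenting with is to bypass embedsDepthDataInPhoto entirely and attach the depth data myself via ImageIO: AVDepthData can produce the auxiliary-data dictionary that CGImageDestination expects. A minimal sketch (the function name and the HEIC output type are my own choices, not part of the code above, and I haven’t confirmed this is the intended fix):

```swift
import AVFoundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: write the captured photo to `url`, then attach the depth data
// manually instead of relying on embedsDepthDataInPhoto.
func writePhotoWithDepth(photo: AVCapturePhoto, to url: URL) {
    guard let imageData = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let depthData = photo.depthData else { return }

    // AVDepthData builds the auxiliary-data dictionary itself and reports
    // which auxiliary type (depth or disparity) the dictionary represents.
    var auxType: NSString?
    guard let auxDict = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
          let auxDataType = auxType else { return }

    // HEIC chosen here because it supports auxiliary images; JPEG also works.
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.heic.identifier as CFString, 1, nil) else { return }

    // Copy the primary image across, then attach the depth auxiliary data
    // and finalize the file.
    CGImageDestinationAddImageFromSource(destination, source, 0, nil)
    CGImageDestinationAddAuxiliaryDataInfo(destination, auxDataType as CFString, auxDict as CFDictionary)
    CGImageDestinationFinalize(destination)
}
```

With a file written this way, CGImageSourceCopyAuxiliaryDataInfoAtIndex with the matching auxiliary type should return a non-nil dictionary, which at least narrows the problem down to the writing side.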
