mac-avcapture: Update plugin to ObjC and modern Cocoa APIs by PatTheMav · Pull Request #9344 · obsproject/obs-studio

Description

This PR effectively reimplements the video capture device support in OBS Studio using ObjC classes and language features, and deprecates the old variant to keep existing scene setups functional.

Important

This PR has been updated since its original publishing. The changes are detailed in the following section.

Updates to the PR

Motivation and Context

Application development on macOS has become Swift-oriented, which means that many current and future frameworks (and their associated documentation) assume that Swift is used by default.

To allow OBS Studio to move with the times, existing macOS-specific code needs to be migrated to Apple's current coding practices and languages, which can serve as a stepping stone toward full Swift implementations.

Specifically, this requires moving C/C++-oriented code to ObjC (and also migrating ObjC++ code to ObjC, as Cocoa/CoreFoundation solutions are usually more elegant and easier to maintain than their C++ counterparts).

This also allowed separating the plugin's functionality into specific blocks (plugin main entry point, plugin functionality, property API, native ObjC type classes), which makes the code more readable and avoids a single big source file.

Native ObjC types and helper functions also allow pure ObjC code to interact more elegantly with native C types provided by libobs, which will probably be necessary to correctly bridge to Swift at a later point as well.

Note

This PR also makes use of native async dispatch queues, which allow otherwise slow or blocking operations (in this case, creating and starting a capture session) to be handled by Cocoa's event loop and its ephemeral worker threads, without the need for bespoke libobs threads. Function calls that happen exclusively off the main thread are not queued asynchronously, because OBS Studio code mostly expects code in those threads to run synchronously.
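As a rough sketch of the pattern (the function name is illustrative, not the plugin's actual identifier), dispatching the blocking session start onto a GCD worker queue might look like:

```objc
// Sketch only: illustrates moving the slow -startRunning call off the
// calling thread onto a GCD global queue instead of a bespoke libobs thread.
#import <AVFoundation/AVFoundation.h>

static void start_capture_async(AVCaptureSession *session)
{
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        // -startRunning blocks until the session is running (or fails),
        // so it should not be called on the main thread.
        [session startRunning];
    });
}
```

The block is executed by one of GCD's ephemeral worker threads, so no thread lifecycle management is needed on the plugin side.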

How Has This Been Tested?

Tested on macOS 13 with built-in Studio Display Camera and Logitech C922 external camera:

Notable Quirk

Note

A quirk already present in the old video capture source plugin has not been fixed in the rewrite (I attempted to do so, but this created a whole new slew of issues):

When a device is initialised with a preset, the sample buffer dimensions provided by AVFoundation are defined by that preset, independent of the actual device resolution:

  1. Select "1280x720" preset
  2. Observe sample buffer resolution of 1280x720 pixels
  3. Change internal device resolution to 160x120 pixels
  4. Observe sample buffer resolution is kept at 1280x720 pixels, but image is pillarboxed and upscaled from the 160x120 device image
  5. Select "1920x1080" preset
  6. Observe sample buffer resolution changes to 1920x1080 pixels, but the image is still pillarboxed and upscaled

Notably, this does not happen if preset usage is disabled before selecting a device, as the capture session will then be set up with a device format directly. Interestingly, the sample buffer provided by AVFoundation will then keep track with the actual device resolution, but once presets are used, the buffer size is determined entirely by the preset.
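The two setup paths described above can be sketched roughly as follows (illustrative only; the preset constant and function name are assumptions, not the plugin's actual code):

```objc
// Sketch of the two configuration paths: preset-based vs. device-format-based.
#import <AVFoundation/AVFoundation.h>

static void configure_session(AVCaptureSession *session,
                              AVCaptureDevice *device, BOOL usePreset)
{
    [session beginConfiguration];
    if (usePreset) {
        // Preset path: sample buffer dimensions follow the preset,
        // regardless of the device's internal resolution.
        session.sessionPreset = AVCaptureSessionPreset1280x720;
    } else {
        // Device-format path: the sample buffer tracks the actual device
        // resolution. Assumes the device has already been locked via
        // -lockForConfiguration: before its activeFormat is changed.
        device.activeFormat = device.formats.firstObject;
    }
    [session commitConfiguration];
}
```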

No Automatic Selection of Properties

Important

The automatic property selection of the old plugin has not been reimplemented; users need to explicitly select the colour format, frame rate, and other properties when not using presets. Presets are the canonical and best way to use video capture input.
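When presets are disabled, applying an explicit format and frame rate boils down to locking the device for configuration and setting its active properties. A minimal sketch (the function name is hypothetical; error handling is reduced to the essentials):

```objc
// Sketch: manually applying a device format and frame rate, as required
// when preset usage is disabled.
#import <AVFoundation/AVFoundation.h>

static BOOL apply_format(AVCaptureDevice *device,
                         AVCaptureDeviceFormat *format, CMTime frameDuration)
{
    NSError *error = nil;

    // The device must be locked before its active configuration may change.
    if (![device lockForConfiguration:&error])
        return NO;

    device.activeFormat = format;
    // Pinning min and max duration to the same value fixes the frame rate.
    device.activeVideoMinFrameDuration = frameDuration;
    device.activeVideoMaxFrameDuration = frameDuration;

    [device unlockForConfiguration];
    return YES;
}
```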

Quirks with iOS Devices

Using iOS devices as camera sources via USB still works (including muxed audio), but retains some notable quirks:

When a device is used just for its camera, "Continuity Camera" should be preferred; capture via USB should be reserved for capturing screen contents.

Types of changes

Checklist: