Streaming from Multiple Cameras
Capturing the User and the World 📱 👩🏾 ⛰️
VideoKit supports multi-camera streaming, where multiple independent cameras stream simultaneously. This enables you to build highly interactive media experiences in your application.
Currently, multi-camera devices can only be discovered on Android and iOS. Other platforms do not support them.
Streaming multiple cameras requires an active VideoKit Core plan.
Using the Camera Manager Component
VideoKit provides the VideoKitCameraManager component to simplify working with camera devices in Unity scenes.
Adding the Component
The first step is to add a VideoKitCameraManager component to your scene.
Next, select both User and World in the Facing property:
[GIF HERE]
Finally, change the Resolution and Frame Rate to Default.
Setting the Resolution or Frame Rate to anything other than Default might cause the multi-camera device to fail to start streaming.
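You can also configure the manager from a script instead of the Inspector. The sketch below is an assumption-laden illustration: it assumes the component lives in the VideoKit namespace, and that the Inspector fields above map to facing, resolution, and frameRate properties with matching enum values. Check the API reference for the exact names:
using UnityEngine;
using VideoKit; // assumed namespace

public class MultiCameraSetup : MonoBehaviour {

    void Awake () {
        // Add the camera manager to this game object
        var cameraManager = gameObject.AddComponent<VideoKitCameraManager>();
        // Stream from both the user-facing and world-facing cameras (assumed flags enum)
        cameraManager.facing = VideoKitCameraManager.Facing.User | VideoKitCameraManager.Facing.World;
        // Keep the default resolution and frame rate (assumed enum values)
        cameraManager.resolution = VideoKitCameraManager.Resolution.Default;
        cameraManager.frameRate = VideoKitCameraManager.FrameRate.Default;
    }
}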
Displaying the Preview
To display the camera preview, first create a UI RawImage and add a VideoKitCameraView component:
Finally, choose a camera to display:
[GIF HERE]
Make sure that the camera view Facing is set to either User or World, but not both.
Now build and run on your Android or iOS device!
Using the VideoKit API
Multi-camera streaming is exposed with the MultiCameraDevice class.
Discovering Camera Devices
Because camera devices access highly sensitive user data, the very first step is to request camera permissions from the user. We provide the MultiCameraDevice.CheckPermissions method for this:
// Request camera permissions from the user
PermissionStatus status = await MultiCameraDevice.CheckPermissions(request: true);
// Check that the user granted permissions
if (status != PermissionStatus.Granted)
    return;
You can also use the CameraDevice.CheckPermissions method. They are equivalent.
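Because CheckPermissions is awaited, it must be called from an async context. A minimal sketch that runs the check in an async Start method, assuming the VideoKit namespace:
using UnityEngine;
using VideoKit; // assumed namespace

public class CameraPermissions : MonoBehaviour {

    async void Start () {
        // Request camera permissions from the user
        var status = await MultiCameraDevice.CheckPermissions(request: true);
        // Bail out if the user did not grant permissions
        if (status != PermissionStatus.Granted) {
            Debug.LogError("User did not grant camera permissions");
            return;
        }
        // Camera discovery and streaming can proceed from here
    }
}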
Once camera permissions have been granted, we can discover available multi-camera devices using the MultiCameraDevice.Discover method:
using System.Linq;
// Discover available multi-camera devices
MultiCameraDevice[] multiCameraDevices = await MultiCameraDevice.Discover();
// Use the first multi-camera device
MultiCameraDevice multiCameraDevice = multiCameraDevices.FirstOrDefault();
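Each multi-camera device is composed of multiple constituent cameras, exposed through its cameras property (used again below when filtering pixel buffers). A quick sketch for inspecting them:
using UnityEngine;

// Log the constituent cameras of the chosen multi-camera device
foreach (CameraDevice camera in multiCameraDevice.cameras)
    Debug.Log($"Multi-camera device contains camera: {camera}");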
Streaming Pixel Buffers
To begin streaming pixel buffers from the multi-camera device, use the MultiCameraDevice.StartRunning method:
// Start streaming pixel buffers from the multi-camera
multiCameraDevice.StartRunning(OnPixelBuffer);
You must provide a callback that will be invoked with pixel buffers as they become available:
void OnPixelBuffer (CameraDevice cameraDevice, PixelBuffer pixelBuffer) {
    // ...
}
The provided callback is invoked with the latest PixelBuffer along with the CameraDevice that generated it.
The MultiCameraDevice will always invoke this callback on a dedicated camera thread, not the Unity main thread. As a result, do not call Unity methods from this callback.
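When you no longer need the stream, stop the device. This sketch assumes MultiCameraDevice exposes the same StopRunning method as CameraDevice:
// Stop streaming pixel buffers from the multi-camera device
multiCameraDevice.StopRunning();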
Displaying Pixel Buffers
The PixelBuffer instances provided to the preview callback contain raw pixel data. This data is usually optimized for the camera system, which means that it is often in a planar format (e.g. YUV) and has no geometric transformations applied (i.e. orientation, mirroring).
Before we can display the pixel data, we must first convert it to the RGBA8888 pixel format and apply any relevant geometric transformations. First, allocate a NativeArray to hold the transformed pixel data and a Texture2D to actually display the pixel data:
using Unity.Collections;
using UnityEngine;

private NativeArray<byte> rgbaData;
private Texture2D texture;

void StartCamera () {
    // Get the preview resolution of the first camera in the multi-camera device
    var (width, height) = multiCameraDevice.cameras[0].previewResolution;
    // Create preview data
    rgbaData = new NativeArray<byte>(width * height * 4, Allocator.Persistent);
    // Create the preview texture
    texture = new Texture2D(width, height, TextureFormat.RGBA32, false);
    // Start streaming pixel buffers from the camera
    multiCameraDevice.StartRunning(OnPixelBuffer);
}
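Since rgbaData is allocated with Allocator.Persistent, remember to dispose it when the camera stops. A minimal cleanup sketch, again assuming a StopRunning method on MultiCameraDevice:
void OnDestroy () {
    // Stop streaming before releasing the preview data
    multiCameraDevice.StopRunning();
    // Free the manually-allocated preview data
    rgbaData.Dispose();
}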
When we receive a new pixel buffer in OnPixelBuffer, we can use the PixelBuffer.CopyTo method to copy the pixel data to our rgbaData while applying relevant pixel format and geometric transformations:
void OnPixelBuffer (CameraDevice cameraDevice, PixelBuffer cameraBuffer) {
    // Ensure that we only handle the first camera device in the multi-camera device
    if (cameraDevice != multiCameraDevice.cameras[0])
        return;
    lock (texture) {
        // Create a destination `PixelBuffer` backed by our preview data
        using var previewBuffer = new PixelBuffer(
            cameraBuffer.width,
            cameraBuffer.height,
            PixelBuffer.Format.RGBA8888,
            rgbaData
        );
        // Copy the pixel data from the camera buffer to our preview buffer
        cameraBuffer.CopyTo(previewBuffer, rotation: PixelBuffer.Rotation._180);
    }
}
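The guard above drops frames from every camera except the first. To preview both cameras, route each camera's frames into its own destination buffer instead. A sketch, where worldRgbaData is a hypothetical second NativeArray allocated the same way as rgbaData:
void OnPixelBuffer (CameraDevice cameraDevice, PixelBuffer cameraBuffer) {
    // Pick the destination buffer for the camera that generated this frame
    var destination = cameraDevice == multiCameraDevice.cameras[0] ? rgbaData : worldRgbaData;
    lock (texture) {
        // Create a destination `PixelBuffer` backed by that camera's preview data
        using var previewBuffer = new PixelBuffer(
            cameraBuffer.width,
            cameraBuffer.height,
            PixelBuffer.Format.RGBA8888,
            destination
        );
        // Copy the pixel data from the camera buffer to the preview buffer
        cameraBuffer.CopyTo(previewBuffer, rotation: PixelBuffer.Rotation._180);
    }
}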
If your rotation is Rotation._90 or Rotation._270, make sure to swap the width and height when creating the previewBuffer pixel buffer.
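For example, a 90-degree rotation would swap the dimensions like this:
// Create a destination buffer with width and height swapped for the rotation
using var previewBuffer = new PixelBuffer(
    cameraBuffer.height,
    cameraBuffer.width,
    PixelBuffer.Format.RGBA8888,
    rgbaData
);
// Copy with a 90-degree rotation applied
cameraBuffer.CopyTo(previewBuffer, rotation: PixelBuffer.Rotation._90);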
Finally, use your script's Update method to upload the preview data into a Texture2D for viewing:
void Update () {
    // Update the preview texture with the latest preview data
    lock (texture)
        texture.GetRawTextureData<byte>().CopyFrom(rgbaData);
    // Upload the texture data to the GPU for display
    texture.Apply();
}
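To actually see the preview on screen, assign the texture to the RawImage created earlier. This only needs to happen once, e.g. at the end of StartCamera; the rawImage field below is a hypothetical Inspector reference:
using UnityEngine.UI;

// Hypothetical reference to the RawImage, assigned in the Inspector
[SerializeField] private RawImage rawImage;

void StartCamera () {
    // ...preview allocation as shown above...
    // Display the preview texture in the UI
    rawImage.texture = texture;
}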
The VideoKitCameraView component handles this process for you.