Class IRtcEngine (Abstract)

The basic interface of the Agora SDK that implements the core functions of real-time communication.

IRtcEngine provides the main methods that your app can call. Before calling other APIs, you must call createAgoraRtcEngine to create an IRtcEngine object.
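
For reference, a minimal creation sketch in TypeScript (assuming the react-native-agora package; adjust the import to the SDK package you are using):

    import { createAgoraRtcEngine } from 'react-native-agora';

    // Create the IRtcEngine instance; every other method in this class requires it.
    const engine = createAgoraRtcEngine();

    // Initialize the engine before calling any other API. '<YOUR_APP_ID>' is a placeholder.
    engine.initialize({ appId: '<YOUR_APP_ID>' });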

Methods

_addListenerPreCheck addListener addVideoWatermark adjustAudioMixingPlayoutVolume adjustAudioMixingPublishVolume adjustAudioMixingVolume adjustCustomAudioPlayoutVolume adjustCustomAudioPublishVolume adjustPlaybackSignalVolume adjustRecordingSignalVolume adjustUserPlaybackSignalVolume clearVideoWatermarks complain configRhythmPlayer createCustomVideoTrack createDataStream createMediaPlayer destroyCustomVideoTrack destroyMediaPlayer disableAudio disableAudioSpectrumMonitor disableVideo enableAudio enableAudioSpectrumMonitor enableAudioVolumeIndication enableCameraCenterStage enableContentInspect enableCustomAudioLocalPlayback enableDualStreamMode enableEncryption enableExtension enableFaceDetection enableInEarMonitoring enableInstantMediaRendering enableLocalAudio enableLocalVideo enableMultiCamera enableSoundPositionIndication enableSpatialAudio enableVideo enableVideoImageSource enableVirtualBackground enableVoiceAITuner enableWebSdkInteroperability getAudioDeviceInfo getAudioDeviceManager getAudioMixingCurrentPosition getAudioMixingDuration getAudioMixingPlayoutVolume getAudioMixingPublishVolume getAudioTrackCount getCameraMaxZoomFactor getConnectionState getCurrentMonotonicTimeInMs getEffectCurrentPosition getEffectDuration getEffectsVolume getErrorDescription getExtensionProperty getLocalSpatialAudioEngine getMediaEngine getMusicContentCenter getNativeHandle getNetworkType getNtpWallTimeInMs getUserInfoByUid getUserInfoByUserAccount getVersion getVideoDeviceManager getVolumeOfEffect initialize isCameraAutoExposureFaceModeSupported isCameraAutoFocusFaceModeSupported isCameraCenterStageSupported isCameraExposurePositionSupported isCameraExposureSupported isCameraFaceDetectSupported isCameraFocusSupported isCameraTorchSupported isCameraZoomSupported isFeatureAvailableOnDevice isSpeakerphoneEnabled joinChannel joinChannelWithUserAccount joinChannelWithUserAccountEx leaveChannel loadExtensionProvider muteAllRemoteAudioStreams muteAllRemoteVideoStreams muteLocalAudioStream muteLocalVideoStream muteRecordingSignal muteRemoteAudioStream muteRemoteVideoStream pauseAllChannelMediaRelay pauseAllEffects pauseAudioMixing pauseEffect playAllEffects playEffect preloadChannel preloadChannelWithUserAccount preloadEffect queryCameraFocalLengthCapability queryCodecCapability queryDeviceScore queryScreenCaptureCapability rate registerAudioEncodedFrameObserver registerAudioSpectrumObserver registerEventHandler registerExtension registerLocalUserAccount registerMediaMetadataObserver release removeAllListeners removeListener renewToken resumeAllChannelMediaRelay resumeAllEffects resumeAudioMixing resumeEffect selectAudioTrack sendCustomReportMessage sendMetaData sendStreamMessage setAINSMode setAdvancedAudioOptions setAudioEffectParameters setAudioEffectPreset setAudioMixingDualMonoMode setAudioMixingPitch setAudioMixingPlaybackSpeed setAudioMixingPosition setAudioProfile setAudioScenario setAudioSessionOperationRestriction setBeautyEffectOptions setCameraAutoExposureFaceModeEnabled setCameraAutoFocusFaceModeEnabled setCameraCapturerConfiguration setCameraDeviceOrientation setCameraExposureFactor setCameraExposurePosition setCameraFocusPositionInPreview setCameraStabilizationMode setCameraTorchOn setCameraZoomFactor setChannelProfile setClientRole setCloudProxy setColorEnhanceOptions setDefaultAudioRouteToSpeakerphone setDirectCdnStreamingAudioConfiguration setDirectCdnStreamingVideoConfiguration setDualStreamMode setEarMonitoringAudioFrameParameters setEffectPosition setEffectsVolume setEnableSpeakerphone 
setExtensionProperty setExtensionProviderProperty setHeadphoneEQParameters setHeadphoneEQPreset setInEarMonitoringVolume setLocalRenderMode setLocalVideoMirrorMode setLocalVoiceEqualization setLocalVoiceFormant setLocalVoicePitch setLocalVoiceReverb setLogFile setLogFileSize setLogFilter setLogLevel setLowlightEnhanceOptions setMaxMetadataSize setMixedAudioFrameParameters setParameters setPlaybackAudioFrameBeforeMixingParameters setPlaybackAudioFrameParameters setRecordingAudioFrameParameters setRemoteDefaultVideoStreamType setRemoteRenderMode setRemoteSubscribeFallbackOption setRemoteUserSpatialAudioParams setRemoteVideoStreamType setRemoteVideoSubscriptionOptions setRemoteVoicePosition setRouteInCommunicationMode setScreenCaptureContentHint setScreenCaptureScenario setSubscribeAudioAllowlist setSubscribeAudioBlocklist setSubscribeVideoAllowlist setSubscribeVideoBlocklist setVideoDenoiserOptions setVideoEncoderConfiguration setVideoScenario setVoiceBeautifierParameters setVoiceBeautifierPreset setVoiceConversionPreset setVolumeOfEffect startAudioMixing startAudioRecording startCameraCapture startDirectCdnStreaming startEchoTest startLastmileProbeTest startLocalVideoTranscoder startMediaRenderingTracing startOrUpdateChannelMediaRelay startPreview startPreviewWithoutSourceType startRhythmPlayer startRtmpStreamWithTranscoding startRtmpStreamWithoutTranscoding startScreenCapture stopAllEffects stopAudioMixing stopAudioRecording stopCameraCapture stopChannelMediaRelay stopDirectCdnStreaming stopEchoTest stopEffect stopLastmileProbeTest stopLocalVideoTranscoder stopPreview stopRhythmPlayer stopRtmpStream stopScreenCapture switchCamera takeSnapshot unloadAllEffects unloadEffect unregisterAudioEncodedFrameObserver unregisterAudioSpectrumObserver unregisterEventHandler unregisterMediaMetadataObserver updateChannelMediaOptions updateLocalTranscoderConfiguration updatePreloadChannelToken updateRtmpTranscoding updateScreenCapture updateScreenCaptureParameters updateScreenCaptureRegion

Methods

  • Adds one IRtcEngineEvent listener. After calling this method, you can listen for the corresponding events in the IRtcEngine object and obtain data through IRtcEngineEvent. Depending on your project needs, you can add multiple listeners for the same event.

    Type Parameters

    Parameters

    • eventType: EventType

      The name of the target event to listen for. See IRtcEngineEvent.

    • listener: IRtcEngineEvent[EventType]

      The callback function for eventType. Take adding a listener for onJoinChannelSuccess as an example:

        // Create an onJoinChannelSuccess object
        const onJoinChannelSuccess = (connection: RtcConnection, elapsed: number) => {};
        // Add one onJoinChannelSuccess listener
        engine.addListener('onJoinChannelSuccess', onJoinChannelSuccess);

    Returns void

  • Adds a watermark image to the local video.

    This method adds a PNG watermark image to the local video in the live streaming. Once the watermark image is added, all the audience in the channel (CDN audience included), and the capturing device can see and capture it. The Agora SDK supports adding only one watermark image onto a local video or CDN live stream; the newly added watermark image replaces the previous one. The watermark coordinates depend on the settings in the setVideoEncoderConfiguration method: If the orientation mode of the encoding video (OrientationMode) is fixed landscape mode or the adaptive landscape mode, the watermark uses the landscape orientation. If the orientation mode of the encoding video (OrientationMode) is fixed portrait mode or the adaptive portrait mode, the watermark uses the portrait orientation. When setting the watermark position, the region must be less than the dimensions set in the setVideoEncoderConfiguration method; otherwise, the watermark image will be cropped. Ensure that you call this method after enableVideo. If you only want to add a watermark to the media push, you can call this method or the startRtmpStreamWithTranscoding method. This method supports adding a watermark image in the PNG file format only. Supported pixel formats of the PNG image are RGBA, RGB, Palette, Gray, and Alpha_gray. If the dimensions of the PNG image differ from your settings in this method, the image will be cropped or zoomed to conform to your settings. If you have enabled the mirror mode for the local video, the watermark on the local video is also mirrored. To avoid mirroring the watermark, Agora recommends that you do not use the mirror and watermark functions for the local video at the same time. You can implement the watermark function in your application layer instead.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • watermarkUrl: string

      The local file path of the watermark image to be added. This method supports adding a watermark image from the local absolute or relative file path.

    • options: WatermarkOptions

      The options of the watermark image to be added. See WatermarkOptions.

    Returns number
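
    A minimal usage sketch, assuming the react-native-agora package; the watermark path, App ID, and WatermarkOptions values below are placeholders:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });
      engine.enableVideo(); // addVideoWatermark must be called after enableVideo

      const ret = engine.addVideoWatermark('/path/to/watermark.png', {
        visibleInPreview: true, // also show the watermark in the local preview
        positionInPortraitMode: { x: 16, y: 16, width: 120, height: 120 },
      });
      if (ret < 0) {
        console.warn('addVideoWatermark failed:', ret);
      }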

  • Adjusts the volume of audio mixing for local playback.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • volume: number

      The volume of audio mixing for local playback. The value ranges between 0 and 100 (default). 100 represents the original volume.

    Returns number

  • Adjusts the volume of audio mixing for publishing.

    This method adjusts the volume of audio mixing for publishing (sending to other users).

    Returns

    0: Success. < 0: Failure.

    Parameters

    • volume: number

      The volume of audio mixing for publishing. The value ranges between 0 and 100 (default). 100 represents the original volume.

    Returns number

  • Adjusts the volume during audio mixing.

    This method adjusts the audio mixing volume on both the local client and remote clients. This method does not affect the volume of the audio file set in the playEffect method.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • volume: number

      Audio mixing volume. The value ranges between 0 and 100. The default value is 100, which means the original volume.

    Returns number

  • Adjusts the volume of the custom audio track played locally.

    Ensure you have called the createCustomAudioTrack method to create a custom audio track before calling this method. If you want to change the volume of the audio to be played locally, you need to call this method again.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • trackId: number

      The audio track ID. Set this parameter to the custom audio track ID returned in createCustomAudioTrack.

    • volume: number

      The volume of the audio source. The value can range from 0 to 100. 0 means mute; 100 means the original volume.

    Returns number

  • Adjusts the volume of the custom audio track played remotely.

    Ensure you have called the createCustomAudioTrack method to create a custom audio track before calling this method. If you want to change the volume of the audio played remotely, you need to call this method again.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • trackId: number

      The audio track ID. Set this parameter to the custom audio track ID returned in createCustomAudioTrack.

    • volume: number

      The volume of the audio source. The value can range from 0 to 100. 0 means mute; 100 means the original volume.

    Returns number

  • Adjusts the playback signal volume of all remote users.

    This method is used to adjust the signal volume of all remote users mixed and played locally. If you need to adjust the signal volume of a specified remote user played locally, it is recommended that you call adjustUserPlaybackSignalVolume instead.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • volume: number

      The volume of the user. The value range is [0,400]. 0: Mute. 100: (Default) The original volume. 400: Four times the original volume (amplifying the audio signals by four times).

    Returns number

  • Adjusts the capturing signal volume.

    If you only need to mute the audio signal, Agora recommends that you use muteRecordingSignal instead.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • volume: number

      The volume of the user. The value range is [0,400]. 0: Mute. 100: (Default) The original volume. 400: Four times the original volume (amplifying the audio signals by four times).

    Returns number

  • Adjusts the playback signal volume of a specified remote user.

    You can call this method to adjust the playback volume of a specified remote user. To adjust the playback volume of different remote users, call the method multiple times, once for each remote user.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID of the remote user.

    • volume: number

      The volume of the user. The value range is [0,400]. 0: Mute. 100: (Default) The original volume. 400: Four times the original volume (amplifying the audio signals by four times).

    Returns number

  • Removes the watermark image from the video stream.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Allows a user to complain about the call quality after a call ends.

    This method allows users to complain about the quality of the call. Call this method after the user leaves the channel.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -2: The parameter is invalid. -7: The method is called before IRtcEngine is initialized.

    Parameters

    • callId: string

      The current call ID. You can get the call ID by calling getCallId.

    • description: string

      A description of the call. The string length should be less than 800 bytes.

    Returns number

  • Configures the virtual metronome.

    After calling startRhythmPlayer, you can call this method to reconfigure the virtual metronome. After enabling the virtual metronome, the SDK plays the specified audio effect file from the beginning, and controls the playback duration of each file according to beatsPerMinute you set in AgoraRhythmPlayerConfig. For example, if you set beatsPerMinute as 60, the SDK plays one beat every second. If the file duration exceeds the beat duration, the SDK only plays the audio within the beat duration. By default, the sound of the virtual metronome is published in the channel. If you want the sound to be heard by the remote users, you can set publishRhythmPlayerTrack in ChannelMediaOptions as true.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Creates a custom video track.

    To publish a custom video source, follow these steps:
    1. Call this method to create a video track and get the video track ID.
    2. Call joinChannel to join the channel. In ChannelMediaOptions, set customVideoTrackId to the video track ID that you want to publish, and set publishCustomVideoTrack to true.
    3. Call pushVideoFrame and specify videoTrackId as the video track ID set in step 2. You can then publish the corresponding custom video source in the channel.

    Returns

    If the method call is successful, the video track ID is returned as the unique identifier of the video track. If the method call fails, 0xffffffff is returned.

    Returns number
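
    A sketch of the publish flow described above, assuming the react-native-agora package; the token, channel name, and frame source are placeholders:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Step 1: create the custom video track and keep its ID.
      const videoTrackId = engine.createCustomVideoTrack();

      // Step 2: join the channel and publish that track.
      engine.joinChannel('<TOKEN>', '<CHANNEL>', 0, {
        customVideoTrackId: videoTrackId,
        publishCustomVideoTrack: true,
      });

      // Step 3: push externally captured frames, tagging them with the track ID.
      // `frame` is a placeholder ExternalVideoFrame filled from your own capturer:
      // engine.getMediaEngine().pushVideoFrame(frame, videoTrackId);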

  • Creates a data stream.

    Returns

    ID of the created data stream, if the method call succeeds. < 0: Failure.

    Parameters

    • config: DataStreamConfig

      The configurations for the data stream. See DataStreamConfig.

    Returns number
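
    A minimal sketch combining createDataStream with sendStreamMessage, assuming the react-native-agora package; the token and channel name are placeholders:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });
      engine.joinChannel('<TOKEN>', '<CHANNEL>', 0, {});

      // Create a reliable, ordered data stream.
      const streamId = engine.createDataStream({ syncWithAudio: false, ordered: true });

      // Send a short message to the other users in the channel (UTF-8 bytes for "hello").
      const data = new Uint8Array([104, 101, 108, 108, 111]);
      if (streamId >= 0) {
        engine.sendStreamMessage(streamId, data, data.length);
      }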

  • Creates a media player object.

    Before calling any APIs in the IMediaPlayer class, you need to call this method to create an instance of the media player. If you need to create multiple instances, you can call this method multiple times.

    Returns

    An IMediaPlayer object, if the method call succeeds. An empty pointer, if the method call fails.

    Returns IMediaPlayer

  • Destroys the specified video track.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • videoTrackId: number

      The video track ID returned by calling the createCustomVideoTrack method.

    Returns number

  • Destroys the media player instance.

    Returns

    ≥ 0: Success. Returns the ID of the media player instance. < 0: Failure.

    Parameters

    Returns number

  • Disables the audio module.

    The audio module is enabled by default, and you can call this method to disable the audio module.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Disables audio spectrum monitoring.

    After calling enableAudioSpectrumMonitor, if you want to disable audio spectrum monitoring, you can call this method. You can call this method either before or after joining a channel.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Disables the video module.

    This method is used to disable the video module.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Enables the audio module.

    The audio module is enabled by default. After calling disableAudio to disable the audio module, you can call this method to re-enable it.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Turns on audio spectrum monitoring.

    If you want to obtain the audio spectrum data of local or remote users, you can register the audio spectrum observer and enable audio spectrum monitoring. You can call this method either before or after joining a channel.

    Returns

    0: Success. < 0: Failure. -2: Invalid parameters.

    Parameters

    • Optional intervalInMS: number

      The interval (in milliseconds) at which the SDK triggers the onLocalAudioSpectrum and onRemoteAudioSpectrum callbacks. The default value is 100. Do not set this parameter to a value less than 10, otherwise calling this method would fail.

    Returns number
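
    A minimal sketch, assuming the react-native-agora package; the observer callback and field names are taken from the IAudioSpectrumObserver interface of that SDK:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Register an observer first, then start monitoring at a 100 ms interval.
      engine.registerAudioSpectrumObserver({
        onLocalAudioSpectrum: (data) => {
          console.log('local spectrum points:', data?.dataLength);
        },
      });
      engine.enableAudioSpectrumMonitor(100);

      // Call disableAudioSpectrumMonitor() when the data is no longer needed.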

  • Enables the reporting of users' volume indication.

    This method enables the SDK to regularly report the volume information to the app of the local user who sends a stream and remote users (three users at most) whose instantaneous volumes are the highest.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • interval: number

      Sets the time interval between two consecutive volume indications: ≤ 0: Disables the volume indication.

      > 0: The time interval (ms) between two consecutive volume indications. Ensure this parameter is set to a value greater than 10, otherwise you will not receive the onAudioVolumeIndication callback. Agora recommends setting this value to greater than 100.

    • smooth: number

      The smoothing factor that sets the sensitivity of the audio volume indicator. The value ranges between 0 and 10. The recommended value is 3. The greater the value, the more sensitive the indicator.

    • reportVad: boolean

      true : Enables the voice activity detection of the local user. Once it is enabled, the vad parameter of the onAudioVolumeIndication callback reports the voice activity status of the local user. false : (Default) Disables the voice activity detection of the local user. Once it is disabled, the vad parameter of the onAudioVolumeIndication callback does not report the voice activity status of the local user, except for the scenario where the engine automatically detects the voice activity of the local user.

    Returns number
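
    A minimal sketch, assuming the react-native-agora package; the onAudioVolumeIndication callback signature is taken from that SDK's event interface:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Report volumes every 200 ms with smoothing factor 3 and local voice activity detection enabled.
      engine.enableAudioVolumeIndication(200, 3, true);

      engine.addListener('onAudioVolumeIndication',
        (connection, speakers, speakerNumber, totalVolume) => {
          console.log('speakers:', speakerNumber, 'total volume:', totalVolume);
        }
      );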

  • Enables or disables portrait center stage.

    The portrait center stage feature is off by default. You need to call this method to turn it on. If you need to disable this feature, you need to call this method again and set enabled to false. This method applies to iOS only.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable the portrait center stage: true : Enable portrait center stage. false : Disable portrait center stage.

    Returns number

  • Enables or disables video screenshot and upload.

    When video screenshot and upload function is enabled, the SDK takes screenshots and uploads videos sent by local users based on the type and frequency of the module you set in ContentInspectConfig. After video screenshot and upload, the Agora server sends the callback notification to your app server in HTTPS requests and sends all screenshots to the third-party cloud storage service.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable video screenshot and upload: true : Enables video screenshot and upload. false : Disables video screenshot and upload.

    • config: ContentInspectConfig

      Screenshot and upload configuration. See ContentInspectConfig.

    Returns number

  • Sets whether to enable the local playback of external audio source.

    Ensure you have called the createCustomAudioTrack method to create a custom audio track before calling this method. After calling this method to enable the local playback of external audio source, if you need to stop local playback, you can call this method again and set enabled to false. You can call adjustCustomAudioPlayoutVolume to adjust the local playback volume of the custom audio track.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • trackId: number

      The audio track ID. Set this parameter to the custom audio track ID returned in createCustomAudioTrack.

    • enabled: boolean

      Whether to play the external audio source: true : Play the external audio source. false : (Default) Do not play the external source.

    Returns number
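
    A minimal sketch, assuming the react-native-agora package; the AudioTrackType enum member and AudioTrackConfig field used below are assumptions based on that SDK:

      import { createAgoraRtcEngine, AudioTrackType } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Create a custom (mixable) audio track first and keep its ID.
      const trackId = engine.createCustomAudioTrack(
        AudioTrackType.AudioTrackMixable,
        { enableLocalPlayback: true }
      );

      // Play the external audio source locally and set its local playback volume.
      engine.enableCustomAudioLocalPlayback(trackId, true);
      engine.adjustCustomAudioPlayoutVolume(trackId, 80);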

  • Sets the dual-stream mode on the sender side and the low-quality video stream.

    Deprecated: This method is deprecated as of v4.2.0. Use setDualStreamMode instead. You can call this method to enable or disable the dual-stream mode on the publisher side. Dual streams are a pairing of a high-quality video stream and a low-quality video stream: High-quality video stream: High bitrate, high resolution. Low-quality video stream: Low bitrate, low resolution. After you enable dual-stream mode, you can call setRemoteVideoStreamType to choose to receive either the high-quality video stream or the low-quality video stream on the subscriber side. This method is applicable to all types of streams from the sender, including but not limited to video streams collected from cameras, screen sharing streams, and custom-collected video streams. If you need to enable dual video streams in a multi-channel scenario, you can call the enableDualStreamModeEx method. You can call this method either before or after joining a channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable dual-stream mode: true : Enable dual-stream mode. false : (Default) Disable dual-stream mode.

    • Optional streamConfig: SimulcastStreamConfig

      The configuration of the low-quality video stream. See SimulcastStreamConfig. When setting mode to DisableSimulcastStream, setting streamConfig will not take effect.

    Returns number
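
    A minimal sketch of both ends, assuming the react-native-agora package; uid 12345 is a placeholder remote user:

      import { createAgoraRtcEngine, VideoStreamType } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Sender side: enable dual-stream mode (deprecated; setDualStreamMode is preferred).
      engine.enableDualStreamMode(true);

      // Subscriber side: receive the low-quality stream of a remote user.
      engine.setRemoteVideoStreamType(12345, VideoStreamType.VideoStreamLow);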

  • Enables or disables the built-in encryption.

    After the user leaves the channel, the SDK automatically disables the built-in encryption. To enable the built-in encryption, call this method before the user joins the channel again.

    Returns

    0: Success. < 0: Failure. -2: An invalid parameter is used. Set the parameter with a valid value. -4: The built-in encryption mode is incorrect or the SDK fails to load the external encryption library. Check the enumeration or reload the external encryption library. -7: The SDK is not initialized. Initialize the IRtcEngine instance before calling this method.

    Parameters

    • enabled: boolean

      Whether to enable built-in encryption: true : Enable the built-in encryption. false : (Default) Disable the built-in encryption.

    • config: EncryptionConfig

      Built-in encryption configurations. See EncryptionConfig.

    Returns number
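
    A minimal sketch, assuming the react-native-agora package; the key and salt below are placeholders that you should distribute from your own server:

      import { createAgoraRtcEngine, EncryptionMode } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Enable built-in encryption before joining the channel.
      const ret = engine.enableEncryption(true, {
        encryptionMode: EncryptionMode.Aes128Gcm2,
        encryptionKey: '<32-byte-key>',
        encryptionKdfSalt: Array.from({ length: 32 }, () => 0), // placeholder salt
      });
      if (ret < 0) {
        console.warn('enableEncryption failed:', ret);
      }
      engine.joinChannel('<TOKEN>', '<CHANNEL>', 0, {});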

  • Enables or disables extensions.

    Returns

    0: Success. < 0: Failure. -3: The extension library is not loaded. Agora recommends that you check the storage location or the name of the dynamic library.

    Parameters

    • provider: string

      The name of the extension provider.

    • extension: string

      The name of the extension.

    • Optional enable: boolean

      Whether to enable the extension: true : Enable the extension. false : Disable the extension.

    • Optional type: MediaSourceType

      Source type of the extension. See MediaSourceType.

    Returns number

  • Enables or disables face detection for the local user.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable face detection for the local user: true : Enable face detection. false : (Default) Disable face detection.

    Returns number

  • Enables in-ear monitoring.

    This method enables or disables in-ear monitoring.

    Returns

    0: Success. < 0: Failure. -8: Make sure the current audio routing is Bluetooth or a headset.

    Parameters

    • enabled: boolean

      Enables or disables in-ear monitoring. true : Enables in-ear monitoring. false : (Default) Disables in-ear monitoring.

    • includeAudioFilters: EarMonitoringFilterType

      The audio filter types of in-ear monitoring. See EarMonitoringFilterType.

    Returns number

  • Enables audio and video frame instant rendering.

    After successfully calling this method, the SDK enables the instant frame rendering mode, which can speed up the first frame rendering after the user joins the channel.

    Returns

    0: Success. < 0: Failure. -7: The method is called before IRtcEngine is initialized.

    Returns number

  • Enables or disables the local audio capture.

    The audio function is enabled by default when users join a channel. This method disables or re-enables the local audio function to stop or restart local audio capturing. The differences between this method and muteLocalAudioStream are as follows: enableLocalAudio : Disables or re-enables the local audio capturing and processing. If you disable or re-enable local audio capturing using the enableLocalAudio method, the local user might hear a pause in the remote audio playback. muteLocalAudioStream : Sends or stops sending the local audio streams without affecting the audio capture status.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      true : (Default) Re-enable the local audio function, that is, to start the local audio capturing device (for example, the microphone). false : Disable the local audio function, that is, to stop local audio capturing.

    Returns number

  • Enables/Disables the local video capture.

    This method disables or re-enables the local video capture, and does not affect receiving the remote video stream. After calling enableVideo, the local video capture is enabled by default. If you call enableLocalVideo (false) to disable local video capture within the channel, it also simultaneously stops publishing the video stream within the channel. If you want to restart video capture, you can call enableLocalVideo (true) and then call updateChannelMediaOptions to set the options parameter to publish the locally captured video stream in the channel. After the local video capturer is successfully disabled or re-enabled, the SDK triggers the onRemoteVideoStateChanged callback on the remote client. You can call this method either before or after joining a channel. This method enables the internal engine and is valid after leaving the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable the local video capture. true : (Default) Enable the local video capture. false : Disable the local video capture. Once the local video is disabled, the remote users cannot receive the video stream of the local user, while the local user can still receive the video streams of remote users. When set to false, this method does not require a local camera.

    Returns number
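
    A minimal sketch of the stop-and-restart flow described above, assuming the react-native-agora package; the token and channel name are placeholders:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });
      engine.enableVideo();
      engine.joinChannel('<TOKEN>', '<CHANNEL>', 0, { publishCameraTrack: true });

      // Stop local capture; this also stops publishing the camera track in the channel.
      engine.enableLocalVideo(false);

      // Later: restart capture and publish the camera track again.
      engine.enableLocalVideo(true);
      engine.updateChannelMediaOptions({ publishCameraTrack: true });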

  • Enables or disables multi-camera capture.

    In scenarios where there are existing cameras to capture video, Agora recommends the following steps to capture and publish video with multiple cameras:
    1. Call this method to enable multi-camera capture.
    2. Call startPreview to start the local video preview.
    3. Call startCameraCapture, and set sourceType to start video capture with the second camera.
    4. Call joinChannelEx, and set publishSecondaryCameraTrack to true to publish the video stream captured by the second camera in the channel.
    To disable multi-camera capture, use the following steps:
    1. Call stopCameraCapture.
    2. Call this method with enabled set to false.
    You can call this method before or after startPreview to enable multi-camera capture: If it is enabled before startPreview, the local video preview shows the image captured by the two cameras at the same time. If it is enabled after startPreview, the SDK stops the current camera capture first, then enables the primary camera and the second camera; the local video preview appears black for a short time, and then automatically returns to normal. This method applies to iOS only. When using this function, ensure that the system version is 13.0 or later. The minimum iOS device models that support multi-camera capture are: iPhone XR, iPhone XS, iPhone XS Max, and iPad Pro 3rd generation and later.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable multi-camera video capture mode: true : Enable multi-camera capture mode; the SDK uses multiple cameras to capture video. false : Disable multi-camera capture mode; the SDK uses a single camera to capture video.

    • config: CameraCapturerConfiguration

      Capture configuration for the second camera. See CameraCapturerConfiguration.

    Returns number
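
    A partial sketch of the multi-camera flow described above (iOS only), assuming the react-native-agora package; the CameraCapturerConfiguration fields and enum members used below are assumptions based on that SDK:

      import {
        createAgoraRtcEngine,
        CameraDirection,
        VideoSourceType,
      } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });
      engine.enableVideo();

      // Configuration for the second camera.
      const secondCamera = { cameraDirection: CameraDirection.CameraRear };

      // 1. Enable multi-camera capture, then start the local preview.
      engine.enableMultiCamera(true, secondCamera);
      engine.startPreview();

      // 2. Start capture from the second camera.
      engine.startCameraCapture(VideoSourceType.VideoSourceCameraSecondary, secondCamera);

      // 3. Publish the second camera by calling joinChannelEx with
      //    publishSecondaryCameraTrack set to true, as described above.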

  • Enables or disables stereo panning for remote users.

    Ensure that you call this method before joining a channel to enable stereo panning for remote users so that the local user can track the position of a remote user by calling setRemoteVoicePosition.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable stereo panning for remote users: true : Enable stereo panning. false : Disable stereo panning.

    Returns number
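
    A minimal sketch, assuming the react-native-agora package; uid 12345, the token, and the channel name are placeholders:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Enable stereo panning before joining the channel.
      engine.enableSoundPositionIndication(true);
      engine.joinChannel('<TOKEN>', '<CHANNEL>', 0, {});

      // Place remote user 12345 slightly to the right (pan 0.5) at full gain (100).
      engine.setRemoteVoicePosition(12345, 0.5, 100);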

  • Enables or disables the spatial audio effect.

    After enabling the spatial audio effect, you can call setRemoteUserSpatialAudioParams to set the spatial audio effect parameters of the remote user. You can call this method either before or after joining a channel. This method relies on the spatial audio dynamic library libagora_spatial_audio_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable the spatial audio effect: true : Enable the spatial audio effect. false : Disable the spatial audio effect.

    Returns number

  • Enables the video module.

    The video module is disabled by default; call this method to enable it. If you need to disable the video module later, you need to call disableVideo.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Sets whether to replace the current video feeds with images when publishing video streams.

    When publishing video streams, you can call this method to replace the current video feeds with custom images. Once you enable this function, you can select images to replace the video feeds through the ImageTrackOptions parameter. If you disable this function, the remote users see the video feeds that you publish.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enable: boolean

      Whether to replace the current video feeds with custom images: true : Replace the current video feeds with custom images. false : (Default) Do not replace the current video feeds with custom images.

    • options: ImageTrackOptions

      Image configurations. See ImageTrackOptions.

    Returns number

  • Enables/Disables the virtual background.

    The virtual background feature enables the local user to replace their original background with a static image, dynamic video, blurred background, or portrait-background segmentation to achieve a picture-in-picture effect. Once the virtual background feature is enabled, all users in the channel can see the custom background. Call this method after calling enableVideo or startPreview. This feature has high requirements on device performance. When calling this method, the SDK automatically checks the capabilities of the current device. Agora recommends you use the virtual background feature on devices with the following processors: Snapdragon 700 series (750G and later); Snapdragon 800 series (835 and later); Dimensity 700 series (720 and later); Kirin 800 series (810 and later); Kirin 900 series (980 and later); devices with an A9 chip or better, namely iPhone 6S and later, iPad Air 3rd generation and later, iPad 5th generation and later, iPad Pro 1st generation and later, and iPad mini 5th generation and later. Agora recommends that you use this feature in scenarios that meet the following conditions: A high-definition camera device is used, and the environment is uniformly lit. There are few objects in the captured video. Portraits are half-length and unobstructed. Ensure that the background is a solid color that is different from the color of the user's clothing. This method relies on the virtual background dynamic library libagora_segmentation_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure. -4: The device capabilities do not meet the requirements for the virtual background feature. Agora recommends you try it on devices with higher performance.

    Parameters

    • enabled: boolean

      Whether to enable virtual background: true : Enable virtual background. false : Disable virtual background.

    • backgroundSource: VirtualBackgroundSource

      The custom background. See VirtualBackgroundSource. To adapt the resolution of the custom background image to that of the video captured by the SDK, the SDK scales and crops the custom background image while ensuring that the content of the custom background image is not distorted.

    • segproperty: SegmentationProperty

      Processing properties for background images. See SegmentationProperty.

    • Optional type: MediaSourceType

      The type of the video source. See MediaSourceType. In this method, this parameter supports only the following two settings: The default value is PrimaryCameraSource. If you want to use the second camera to capture video, set this parameter to SecondaryCameraSource.

    Returns number
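
    A minimal sketch replacing the background with a solid color, assuming the react-native-agora package; the enum members and field names used below are assumptions based on that SDK:

      import {
        createAgoraRtcEngine,
        BackgroundSourceType,
        SegModelType,
      } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });
      engine.enableVideo();
      engine.startPreview();

      // Use a solid green background (0xRRGGBB) with the default segmentation model.
      const ret = engine.enableVirtualBackground(
        true,
        { backgroundSourceType: BackgroundSourceType.BackgroundColor, color: 0x00ff00 },
        { modelType: SegModelType.SegModelAi, greenCapacity: 0.5 }
      );
      if (ret === -4) {
        console.warn('This device does not meet the virtual background requirements.');
      }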

  • Enables or disables the voice AI tuner.

    The voice AI tuner supports enhancing sound quality and adjusting tone style.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable the voice AI tuner: true : Enables the voice AI tuner. false : (Default) Disable the voice AI tuner.

    • type: VoiceAiTunerType

      Voice AI tuner sound types, see VoiceAiTunerType.

    Returns number

  • Enables interoperability with the Agora Web SDK (applicable only in the live streaming scenarios).

    Deprecated: The SDK automatically enables interoperability with the Web SDK, so you no longer need to call this method. You can call this method to enable or disable interoperability with the Agora Web SDK. If a channel has Web SDK users, ensure that you call this method, or the video of the Native user will be a black screen for the Web user. This method is only applicable in live streaming scenarios, and interoperability is enabled by default in communication scenarios.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable interoperability: true : Enable interoperability. false : (Default) Disable interoperability.

    Returns number

  • Gets the audio device information.

    After calling this method, you can get whether the audio device supports ultra-low-latency capture and playback. You can call this method either before or after joining a channel.

    Returns

    The DeviceInfo object that identifies the audio device information. Not null: Success. Null: Failure.

    Returns DeviceInfo

  • Retrieves the playback position (ms) of the music file.

    Retrieves the playback position (ms) of the audio. You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged (AudioMixingStatePlaying) callback. If you need to call getAudioMixingCurrentPosition multiple times, ensure that the time interval between calling this method is more than 500 ms.

    Returns

    ≥ 0: The current playback position (ms) of the audio mixing, if this method call succeeds. 0 represents that the current music file does not start playing. < 0: Failure.

    Returns number

  • Retrieves the duration (ms) of the music file.

    Retrieves the total duration (ms) of the audio.

    Returns

    ≥ 0: The audio mixing duration, if this method call succeeds. < 0: Failure.

    Returns number

  • Retrieves the audio mixing volume for local playback.

    You can call this method to get the local playback volume of the mixed audio file, which helps in troubleshooting volume‑related issues.

    Returns

    ≥ 0: The audio mixing volume, if this method call succeeds. The value range is [0,100]. < 0: Failure.

    Returns number

  • Retrieves the audio mixing volume for publishing.

    This method helps troubleshoot audio volume‑related issues. You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged (AudioMixingStatePlaying) callback.

    Returns

    ≥ 0: The audio mixing volume, if this method call succeeds. The value range is [0,100]. < 0: Failure.

    Returns number

  • Gets the index of audio tracks of the current music file.

    You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged (AudioMixingStatePlaying) callback.

    Returns

    The SDK returns the index of the audio tracks if the method call succeeds. < 0: Failure.

    Returns number

  • Gets the maximum zoom ratio supported by the camera.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2).

    Returns

    The maximum zoom factor.

    Returns number

  • Gets the current Monotonic Time of the SDK.

    Monotonic Time refers to a monotonically increasing time series whose value increases over time. The unit is milliseconds. In custom video capture and custom audio capture scenarios, in order to ensure audio and video synchronization, Agora recommends that you call this method to obtain the current Monotonic Time of the SDK, and then pass this value into the timestamp parameter in the captured video frame (VideoFrame) and audio frame (AudioFrame).

    Returns

    ≥0: The method call is successful, and returns the current Monotonic Time of the SDK (in milliseconds). < 0: Failure.

    Returns number

  • Retrieves the playback position of the audio effect file.

    Call this method after playEffect.

    Returns

    The playback position (ms) of the specified audio effect file, if the method call succeeds. < 0: Failure.

    Parameters

    • soundId: number

      The audio effect ID. The ID of each audio effect file is unique.

    Returns number

  • Retrieves the duration of the audio effect file.

    Call this method after joining a channel.

    Returns

    The total duration (ms) of the specified audio effect file, if the method call succeeds. < 0: Failure.

    Parameters

    • filePath: string

      File path: Android: The file path, which needs to be accurate to the file name and suffix. Agora supports URL addresses, absolute paths, or file paths that start with /assets/. You might encounter permission issues if you use an absolute path to access a local file, so Agora recommends using a URI address instead, for example: content://com.android.providers.media.documents/document/audio%3A14441 iOS: The absolute path or URL address (including the suffix of the filename) of the audio effect file. For example: /var/mobile/Containers/Data/audio.mp4.

    Returns number

  • Retrieves the volume of the audio effects.

    The volume is an integer ranging from 0 to 100. The default value is 100, which means the original volume. Call this method after playEffect.

    Returns

    Volume of the audio effects, if this method call succeeds. < 0: Failure.

    Returns number

  • Gets the warning or error description.

    Returns

    The specific error description.

    Parameters

    • code: number

      The error code reported by the SDK.

    Returns string

  • Gets detailed information on the extensions.

    Returns

    The extension information, if the method call succeeds. An empty string, if the method call fails.

    Parameters

    • provider: string

      The name of the extension provider.

    • extension: string

      The name of the extension.

    • key: string

      The key of the extension.

    • bufLen: number

      Maximum length of the JSON string indicating the extension property. The maximum value is 512 bytes.

    • Optional type: MediaSourceType

      Source type of the extension. See MediaSourceType.

    Returns string

  • Gets IMusicContentCenter.

    Returns

    One IMusicContentCenter object.

    Returns IMusicContentCenter

  • Gets the C++ handle of the Native SDK.

    This method retrieves the C++ handle of the SDK, which is used for registering the audio and video frame observer.

    Returns

    The native handle of the SDK.

    Returns number

  • Gets the type of the local network connection.

    You can use this method to get the type of network in use at any stage. You can call this method either before or after joining a channel.

    Returns

    ≥ 0: The method call is successful, and the local network connection type is returned. 0: The SDK disconnects from the network. 1: The network type is LAN. 2: The network type is Wi-Fi (including hotspots). 3: The network type is mobile 2G. 4: The network type is mobile 3G. 5: The network type is mobile 4G. 6: The network type is mobile 5G. < 0: The method call failed with an error code. -1: The network type is unknown.

    Returns number

  • Gets the current NTP (Network Time Protocol) time.

    In the real-time chorus scenario, especially when the downlink connections are inconsistent due to network issues among multiple receiving ends, you can call this method to obtain the current NTP time as the reference time, in order to align the lyrics and music of multiple receiving ends and achieve chorus synchronization.

    Returns

    The Unix timestamp (ms) of the current NTP time.

    Returns number

  • Gets the user information by passing in the user ID.

    After a remote user joins the channel, the SDK gets the UID and user account of the remote user, caches them in a mapping table object, and triggers the onUserInfoUpdated callback on the local client. After receiving the callback, you can call this method and pass in the UID to get the user account of the specified user from the UserInfo object.

    Returns

    A pointer to the UserInfo instance, if the method call succeeds. If the call fails, returns null.

    Parameters

    • uid: number

      The user ID.

    Returns UserInfo

  • Gets the user information by passing in the user account.

    After a remote user joins the channel, the SDK gets the UID and user account of the remote user, caches them in a mapping table object, and triggers the onUserInfoUpdated callback on the local client. After receiving the callback, you can call this method and pass in the user account to get the UID of the remote user from the UserInfo object.

    Returns

    A pointer to the UserInfo instance, if the method call succeeds. If the call fails, returns null.

    Parameters

    • userAccount: string

      The user account.

    Returns UserInfo
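
    A minimal sketch covering both lookup methods, assuming the react-native-agora package; the onUserInfoUpdated callback signature is taken from that SDK's event interface:

      import { createAgoraRtcEngine } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Wait for the SDK to cache the uid/user-account mapping, then look either one up.
      engine.addListener('onUserInfoUpdated', (uid, info) => {
        const byUid = engine.getUserInfoByUid(uid);
        const byAccount = engine.getUserInfoByUserAccount(info.userAccount ?? '');
        console.log('resolved user:', byUid?.uid, byAccount?.userAccount);
      });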

  • Gets the volume of a specified audio effect file.

    Returns

    ≥ 0: Returns the volume of the specified audio effect, if the method call is successful. The value ranges between 0 and 100. 100 represents the original volume. < 0: Failure.

    Parameters

    • soundId: number

      The ID of the audio effect file.

    Returns number

  • Initializes IRtcEngine. All the methods provided by the IRtcEngine class are executed asynchronously. Agora recommends calling these methods in the same thread.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -2: The parameter is invalid. -7: The SDK is not initialized. -22: The resource request failed. The SDK fails to allocate resources because your app consumes too much system resource or the system resources are insufficient. -101: The App ID is invalid.

    Parameters

    • context: RtcEngineContext

      Configurations for the IRtcEngine instance. See RtcEngineContext.

    Returns number
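
    A minimal initialization sketch, assuming the react-native-agora package; the App ID is a placeholder and the channel profile is just one example RtcEngineContext field:

      import {
        createAgoraRtcEngine,
        ChannelProfileType,
      } from 'react-native-agora';

      const engine = createAgoraRtcEngine();

      // Initialize with the live-streaming channel profile and check the return code.
      const ret = engine.initialize({
        appId: '<YOUR_APP_ID>',
        channelProfile: ChannelProfileType.ChannelProfileLiveBroadcasting,
      });
      if (ret !== 0) {
        console.warn('initialize failed:', ret); // for example, -101 means the App ID is invalid
      }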

  • Checks whether the device supports auto exposure.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2). This method applies to iOS only.

    Returns

    true : The device supports auto exposure. false : The device does not support auto exposure.

    Returns boolean

  • Checks whether the device supports the face auto-focus function.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2).

    Returns

    true : The device supports the face auto-focus function. false : The device does not support the face auto-focus function.

    Returns boolean

  • Check if the camera supports portrait center stage.

    This method applies to iOS only. Before calling enableCameraCenterStage to enable portrait center stage, it is recommended to call this method to check if the current device supports the feature.

    Returns

    true : The current camera supports the portrait center stage. false : The current camera does not support the portrait center stage.

    Returns boolean

  • Checks whether the device supports manual exposure.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2).

    Returns

    true : The device supports manual exposure. false : The device does not support manual exposure.

    Returns boolean

  • Queries whether the current camera supports adjusting exposure value.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2). Before calling setCameraExposureFactor, Agora recommends that you call this method to query whether the current camera supports adjusting the exposure value. By calling this method, you adjust the exposure value of the currently active camera, that is, the camera specified when calling setCameraCapturerConfiguration.

    Returns

    true : The current camera supports adjusting the exposure value. false : The current camera does not support adjusting the exposure value.

    Returns boolean

  • Checks whether the device camera supports face detection.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2). This method is for Android and iOS only.

    Returns

    true : The device camera supports face detection. false : The device camera does not support face detection.

    Returns boolean

  • Check whether the device supports the manual focus function.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2).

    Returns

    true : The device supports the manual focus function. false : The device does not support the manual focus function.

    Returns boolean

  • Checks whether the device supports camera flash.

    This method must be called after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateEncoding (2). The app enables the front camera by default. If your front camera does not support flash, this method returns false. If you want to check whether the rear camera supports the flash function, call switchCamera before this method. On iPads with system version 15, even if isCameraTorchSupported returns true, you might fail to successfully enable the flash by calling setCameraTorchOn due to system issues.

    Returns

    true : The device supports camera flash. false : The device does not support camera flash.

    Returns boolean

  • Checks whether the device supports camera zoom.

    Returns

    true : The device supports camera zoom. false : The device does not support camera zoom.

    Returns boolean

  • Checks whether the device supports the specified advanced feature.

    Checks whether the capabilities of the current device meet the requirements for advanced features such as virtual background and image enhancement.

    Returns

    true : The current device supports the specified feature. false : The current device does not support the specified feature.

    Parameters

    • type: FeatureType

      The type of the advanced feature, see FeatureType.

    Returns boolean

  • Checks whether the speakerphone is enabled.

    Returns

    true : The speakerphone is enabled, and the audio plays from the speakerphone. false : The speakerphone is not enabled, and the audio plays from devices other than the speakerphone. For example, the headset or earpiece.

    Returns boolean

  • Joins a channel with media options.

    This method supports setting the media options when joining a channel, such as whether to publish audio and video streams within the channel, or whether to automatically subscribe to the audio and video streams of all remote users when joining a channel. By default, the user subscribes to the audio and video streams of all the other users in the channel, giving rise to usage and billing. To stop subscribing to other streams, set the options parameter or call the corresponding mute methods.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again. -3: Fails to initialize the IRtcEngine object. You need to reinitialize the IRtcEngine object. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. -8: The internal state of the IRtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method. -17: The request to join the channel is rejected. The typical cause is that the user is already in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the ConnectionStateDisconnected (1) state. -102: The channel name is invalid. You need to pass in a valid channel name in channelId to rejoin the channel. -121: The user ID is invalid. You need to pass in a valid user ID in uid to rejoin the channel.

    Parameters

    • token: string

      The token generated on your server for authentication. (Recommended) If your project has enabled the security mode (using APP ID and Token for authentication), this parameter is required. If you have only enabled the testing mode (using APP ID for authentication), this parameter is optional. You will automatically exit the channel 24 hours after successfully joining in. If you need to join different channels at the same time or switch between channels, Agora recommends using a wildcard token so that you don't need to apply for a new token every time joining a channel.

    • channelId: string

      The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total): All lowercase English letters: a to z. All uppercase English letters: A to Z. All numeric characters: 0 to 9. "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • uid: number

      The user ID. This parameter is used to identify the user in the channel for real-time audio and video interaction. You need to set and manage user IDs yourself, and ensure that each user ID in the same channel is unique. This parameter is a 32-bit unsigned integer, and the value range is 1 to 2^32 - 1. If the user ID is not assigned (or set to 0), the SDK assigns a random user ID and returns it in the onJoinChannelSuccess callback. Your application must record and maintain the returned user ID, because the SDK does not do so.

    • options: ChannelMediaOptions

      The channel media options. See ChannelMediaOptions.

    Returns number
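
    A minimal join sketch, assuming the react-native-agora package; the token, channel name, and media options shown are placeholders you should adapt to your scenario:

      import {
        createAgoraRtcEngine,
        ChannelProfileType,
        ClientRoleType,
      } from 'react-native-agora';

      const engine = createAgoraRtcEngine();
      engine.initialize({ appId: '<YOUR_APP_ID>' });

      // Confirm the join via the onJoinChannelSuccess event.
      engine.addListener('onJoinChannelSuccess', (connection, elapsed) => {
        console.log('joined', connection.channelId, 'in', elapsed, 'ms');
      });

      // Join as a broadcaster, publish the microphone and camera, and subscribe to remote streams.
      engine.joinChannel('<TOKEN>', '<CHANNEL>', 0, {
        channelProfile: ChannelProfileType.ChannelProfileLiveBroadcasting,
        clientRoleType: ClientRoleType.ClientRoleBroadcaster,
        publishMicrophoneTrack: true,
        publishCameraTrack: true,
        autoSubscribeAudio: true,
        autoSubscribeVideo: true,
      });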

  • Join a channel using a user account and token, and set the media options.

    Before calling this method, if you have not called registerLocalUserAccount to register a user account, when you call this method to join a channel, the SDK automatically creates a user account for you. Calling the registerLocalUserAccount method to register a user account, and then calling this method to join a channel can shorten the time it takes to enter the channel. Once a user joins the channel, the user subscribes to the audio and video streams of all the other users in the channel by default, giving rise to usage and billings. To stop subscribing to a specified stream or all remote streams, call the corresponding mute methods. To ensure smooth communication, use the same parameter type to identify the user. For example, if a user joins the channel with a UID, then ensure all the other users use the UID too. The same applies to the user account. If a user joins the channel with the Agora Web SDK, ensure that the ID of the user is set to the same parameter type.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again. -3: Fails to initialize the IRtcEngine object. You need to reinitialize the IRtcEngine object. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. -8: The internal state of the IRtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method. -17: The request to join the channel is rejected. The typical cause is that the user is already in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the ConnectionStateDisconnected (1) state. -102: The channel name is invalid. You need to pass in a valid channel name in channelId to rejoin the channel. -121: The user ID is invalid. You need to pass in a valid user ID in uid to rejoin the channel.

    Parameters

    • token: string

      The token generated on your server for authentication. (Recommended) If your project has enabled the security mode (using APP ID and Token for authentication), this parameter is required. If you have only enabled the testing mode (using APP ID for authentication), this parameter is optional. You will automatically exit the channel 24 hours after successfully joining in. If you need to join different channels at the same time or switch between channels, Agora recommends using a wildcard token so that you don't need to apply for a new token every time joining a channel.

    • channelId: string

      The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total): All lowercase English letters: a to z. All uppercase English letters: A to Z. All numeric characters: 0 to 9. "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • userAccount: string

      The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as null. Supported characters are as follows (89 in total): The 26 lowercase English letters: a to z. The 26 uppercase English letters: A to Z. All numeric characters: 0 to 9. Space "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • Optional options: ChannelMediaOptions

      The channel media options. See ChannelMediaOptions.

    Returns number

  • Join a channel using a user account and token, and set the media options.

    If you have not called registerLocalUserAccount to register a user account before calling this method, the SDK automatically creates a user account for you when you join the channel. Calling the registerLocalUserAccount method to register a user account, and then calling this method to join a channel can shorten the time it takes to enter the channel. Once a user joins the channel, the user subscribes to the audio and video streams of all the other users in the channel by default, which incurs usage and affects billing. If you want to stop subscribing to the media stream of other users, you can set the options parameter or call the corresponding mute method. To ensure smooth communication, use the same parameter type to identify the user. For example, if a user joins the channel with a UID, then ensure all the other users use the UID too. The same applies to the user account. If a user joins the channel with the Agora Web SDK, ensure that the ID of the user is set to the same parameter type.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. For example, the token is invalid, the uid parameter is not set to an integer, or the value of a member in ChannelMediaOptions is invalid. You need to pass in a valid parameter and join the channel again. -3: Fails to initialize the IRtcEngine object. You need to reinitialize the IRtcEngine object. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. -8: The internal state of the IRtcEngine object is wrong. The typical cause is that after calling startEchoTest to start a call loop test, you call this method to join the channel without calling stopEchoTest to stop the test. You need to call stopEchoTest before calling this method. -17: The request to join the channel is rejected. The typical cause is that the user is already in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. Do not call this method to join the channel unless you receive the ConnectionStateDisconnected (1) state. -102: The channel name is invalid. You need to pass in a valid channel name in channelId to rejoin the channel. -121: The user ID is invalid. You need to pass in a valid user ID in uid to rejoin the channel.

    Parameters

    • token: string

      The token generated on your server for authentication. (Recommended) If your project has enabled the security mode (using APP ID and Token for authentication), this parameter is required. If you have only enabled the testing mode (using APP ID for authentication), this parameter is optional. You will automatically exit the channel 24 hours after successfully joining. If you need to join different channels at the same time or switch between channels, Agora recommends using a wildcard token so that you don't need to apply for a new token every time you join a channel.

    • channelId: string

      The channel name. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total): All lowercase English letters: a to z. All uppercase English letters: A to Z. All numeric characters: 0 to 9. "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • userAccount: string

      The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as null. Supported characters are as follows (89 in total): The 26 lowercase English letters: a to z. The 26 uppercase English letters: A to Z. All numeric characters: 0 to 9. Space "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • options: ChannelMediaOptions

      The channel media options. See ChannelMediaOptions.

    Returns number
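
    A minimal sketch of the join flow described above, assuming the react-native-agora package and placeholder values for the App ID, token, channel name, and user account:

    import {
      createAgoraRtcEngine,
      ChannelProfileType,
      ClientRoleType,
    } from 'react-native-agora';

    // Create and initialize the engine once before joining.
    const engine = createAgoraRtcEngine();
    engine.initialize({ appId: '<YOUR_APP_ID>' });

    // Join with a user account and explicit media options.
    const ret = engine.joinChannelWithUserAccount(
      '<YOUR_TOKEN>',   // token generated on your server
      'demo-channel',   // channelId
      'alice-001',      // userAccount, unique within the channel
      {
        channelProfile: ChannelProfileType.ChannelProfileLiveBroadcasting,
        clientRoleType: ClientRoleType.ClientRoleBroadcaster,
        autoSubscribeAudio: true,
        autoSubscribeVideo: true,
      }
    );
    if (ret !== 0) {
      console.warn(`joinChannelWithUserAccount failed with code ${ret}`);
    }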

  • Sets channel options and leaves the channel.

    After calling this method, the SDK terminates the audio and video interaction, leaves the current channel, and releases all resources related to the session. After joining the channel, you must call this method to end the call; otherwise, the next call cannot be started. If you have called joinChannelEx to join multiple channels, calling this method will leave all the channels you joined. This method call is asynchronous. When this method returns, it does not necessarily mean that the user has left the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • Optional options: LeaveChannelOptions

      The options for leaving the channel. See LeaveChannelOptions.

    Returns number

  • Loads an extension.

    This method is used to add extensions external to the SDK (such as those from Extensions Marketplace and SDK extensions) to the SDK.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • path: string

      The extension library path and name. For example: /library/libagora_segmentation_extension.dll.

    • Optional unloadAfterUse: boolean

      Whether to uninstall the extension when you no longer need it: true : Uninstall the extension when the IRtcEngine is destroyed. false : (Recommended) Do not uninstall the extension until the process terminates.

    Returns number

  • Stops or resumes subscribing to the audio streams of all remote users.

    After successfully calling this method, the local user stops or resumes subscribing to the audio streams of all remote users, including all subsequent users. By default, the SDK subscribes to the audio streams of all remote users when joining a channel. To modify this behavior, you can set autoSubscribeAudio to false when calling joinChannel to join the channel, which will cancel the subscription to the audio streams of all users upon joining the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • mute: boolean

      Whether to stop subscribing to the audio streams of all remote users: true : Stops subscribing to the audio streams of all remote users. false : (Default) Subscribes to the audio streams of all remote users by default.

    Returns number
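
    A short sketch of the two approaches mentioned above, assuming an initialized IRtcEngine instance named engine (as in the earlier sketch); the token and channel name are placeholders:

    // Approach 1: stop subscribing to every remote audio stream after joining.
    engine.muteAllRemoteAudioStreams(true);
    // ...and resume later.
    engine.muteAllRemoteAudioStreams(false);

    // Approach 2: never subscribe to remote audio in the first place.
    engine.joinChannel('<YOUR_TOKEN>', 'demo-channel', 0, {
      autoSubscribeAudio: false,
    });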

  • Stops or resumes subscribing to the video streams of all remote users.

    After successfully calling this method, the local user stops or resumes subscribing to the video streams of all remote users, including all subsequent users. By default, the SDK subscribes to the video streams of all remote users when joining a channel. To modify this behavior, you can set autoSubscribeVideo to false when calling joinChannel to join the channel, which will cancel the subscription to the video streams of all users upon joining the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • mute: boolean

      Whether to stop subscribing to the video streams of all remote users. true : Stop subscribing to the video streams of all remote users. false : (Default) Subscribe to the video streams of all remote users by default.

    Returns number

  • Stops or resumes publishing the local audio stream.

    This method is used to control whether to publish the locally captured audio stream. If you call this method to stop publishing locally captured audio streams, the audio capturing device will still work and won't be affected.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • mute: boolean

      Whether to stop publishing the local audio stream: true : Stops publishing the local audio stream. false : (Default) Resumes publishing the local audio stream.

    Returns number

  • Stops or resumes publishing the local video stream.

    This method is used to control whether to publish the locally captured video stream. If you call this method to stop publishing locally captured video streams, the video capturing device will still work and won't be affected. Compared to enableLocalVideo (false), which can also cancel the publishing of local video stream by turning off the local video stream capture, this method responds faster.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • mute: boolean

      Whether to stop publishing the local video stream. true : Stop publishing the local video stream. false : (Default) Publish the local video stream.

    Returns number

  • Whether to mute the recording signal.

    If you have already called adjustRecordingSignalVolume to adjust the recording signal volume, when you call this method and set it to true, the SDK behaves as follows: Records the adjusted volume. Mutes the recording signal. When you call this method again and set it to false, the recording signal volume will be restored to the volume recorded by the SDK before muting.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • mute: boolean

      true : Mute the recording signal. false : (Default) Do not mute the recording signal.

    Returns number

  • Stops or resumes subscribing to the audio stream of a specified user.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID of the specified user.

    • mute: boolean

      Whether to stop subscribing to the specified remote user's audio stream. true : Stop subscribing to the audio stream of the specified user. false : (Default) Subscribe to the audio stream of the specified user.

    Returns number

  • Stops or resumes subscribing to the video stream of a specified user.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID of the specified user.

    • mute: boolean

      Whether to stop subscribing to the specified remote user's video stream. true : Stop subscribing to the video streams of the specified user. false : (Default) Subscribe to the video stream of the specified user.

    Returns number

  • Pauses the media stream relay to all target channels.

    After the cross-channel media stream relay starts, you can call this method to pause relaying media streams to all target channels; after the pause, if you want to resume the relay, call resumeAllChannelMediaRelay. Call this method after startOrUpdateChannelMediaRelay.

    Returns

    0: Success. < 0: Failure. -5: The method call was rejected. There is no ongoing channel media relay.

    Returns number

  • Pauses all audio effects.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Pauses playing and mixing the music file.

    After calling startAudioMixing to play a music file, you can call this method to pause the playing. If you need to stop the playback, call stopAudioMixing.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Pauses a specified audio effect file.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The audio effect ID. The ID of each audio effect file is unique.

    Returns number

  • Plays all audio effect files.

    After calling preloadEffect multiple times to preload multiple audio effects into the memory, you can call this method to play all the specified audio effects for all users in the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • loopCount: number

      The number of times the audio effect loops: -1: Play the audio effect files in an indefinite loop until you call stopEffect or stopAllEffects. 0: Play the audio effect once. 1: Play the audio effect twice.

    • pitch: number

      The pitch of the audio effect. The value ranges between 0.5 and 2.0. The default value is 1.0 (original pitch). The lower the value, the lower the pitch.

    • pan: number

      The spatial position of the audio effect. The value ranges between -1.0 and 1.0: -1.0: The audio effect shows on the left. 0: The audio effect shows ahead. 1.0: The audio effect shows on the right.

    • gain: number

      The volume of the audio effect. The value range is [0, 100]. The default value is 100 (original volume). The smaller the value, the lower the volume.

    • Optional publish: boolean

      Whether to publish the audio effect to the remote users: true : Publish the audio effect to the remote users. Both the local user and remote users can hear the audio effect. false : (Default) Do not publish the audio effect to the remote users. Only the local user can hear the audio effect.

    Returns number

  • Plays the specified local or online audio effect file.

    To play multiple audio effect files at the same time, call this method multiple times with different soundId and filePath values. To achieve the optimal user experience, Agora recommends that you do not play more than three audio effect files at the same time.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The audio effect ID. The ID of each audio effect file is unique. If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of soundId in preloadEffect.

    • filePath: string

      The file path. The SDK supports URLs and absolute path of local files. The absolute path needs to be accurate to the file name and extension. Supported audio formats include MP3, AAC, M4A, MP4, WAV, and 3GP. If you have preloaded an audio effect into memory by calling preloadEffect, ensure that the value of this parameter is the same as that of filePath in preloadEffect.

    • loopCount: number

      The number of times the audio effect loops. ≥ 0: The number of playback times. For example, 1 means looping one time, which means playing the audio effect two times in total. -1: Play the audio file in an infinite loop.

    • pitch: number

      The pitch of the audio effect. The value range is 0.5 to 2.0. The default value is 1.0, which means the original pitch. The lower the value, the lower the pitch.

    • pan: number

      The spatial position of the audio effect. The value ranges between -1.0 and 1.0: -1.0: The audio effect is heard on the left of the user. 0.0: The audio effect is heard in front of the user. 1.0: The audio effect is heard on the right of the user.

    • gain: number

      The volume of the audio effect. The value range is 0.0 to 100.0. The default value is 100.0, which means the original volume. The smaller the value, the lower the volume.

    • Optional publish: boolean

      Whether to publish the audio effect to the remote users: true : Publish the audio effect to the remote users. Both the local user and remote users can hear the audio effect. false : Do not publish the audio effect to the remote users. Only the local user can hear the audio effect.

    • Optional startPos: number

      The playback position (ms) of the audio effect file.

    Returns number
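
    A minimal sketch combining preloadEffect and playEffect, assuming an initialized IRtcEngine instance named engine; the sound ID and file URL are placeholders:

    const soundId = 1;
    const filePath = 'https://example.com/audio/applause.mp3';

    // Preload once so later playback starts without loading delay.
    engine.preloadEffect(soundId, filePath);

    // Play the preloaded effect once, centered, at the original pitch and volume,
    // and publish it so remote users hear it as well.
    engine.playEffect(soundId, filePath, 0, 1.0, 0.0, 100, true);

    // Stop it early if needed.
    engine.stopEffect(soundId);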

  • Preloads a channel with token, channelId, and uid.

    When audience members need to switch between different channels frequently, calling this method can help shorten the time it takes to join a channel, thus reducing the time it takes for audience members to hear and see the host. If you join a preloaded channel, leave it, and want to rejoin the same channel, you do not need to call this method unless the token for preloading the channel expires. Failing to preload a channel does not mean that you can't join a channel, nor will it increase the time of joining a channel.

    Returns

    0: Success. < 0: Failure. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. -102: The channel name is invalid. You need to pass in a valid channel name and join the channel again.

    Parameters

    • token: string

      The token generated on your server for authentication. When the token for preloading channels expires, you can update the token based on the number of channels you preload. When preloading one channel, call this method to pass in the new token. When preloading more than one channel: If you use a wildcard token for all preloaded channels, call updatePreloadChannelToken to update the token. When generating a wildcard token, ensure the user ID is not set as 0. If you use different tokens to preload different channels, call this method to pass in your user ID, channel name, and the new token.

    • channelId: string

      The channel name that you want to preload. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total): All lowercase English letters: a to z. All uppercase English letters: A to Z. All numeric characters: 0 to 9. "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • uid: number

      The user ID. This parameter is used to identify the user in the channel for real-time audio and video interaction. You need to set and manage user IDs yourself, and ensure that each user ID in the same channel is unique. This parameter is a 32-bit unsigned integer. The value range is 1 to 2³²-1. If the user ID is not assigned (or set to 0), the SDK assigns a random user ID and onJoinChannelSuccess returns it in the callback. Your application must record and maintain the returned user ID, because the SDK does not do so.

    Returns number
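
    A sketch of the preload-then-join flow, assuming an initialized IRtcEngine instance named engine; the token, channel name, and user ID are placeholders:

    const token = '<TOKEN_FOR_CHANNEL_A>';
    const channelId = 'channel-A';
    const uid = 12345;

    // Preload while the audience member is still browsing other content.
    const ret = engine.preloadChannel(token, channelId, uid);

    // When the user actually switches to this channel, join as usual;
    // a successful preload simply makes this join faster.
    if (ret === 0) {
      engine.joinChannel(token, channelId, uid, {});
    }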

  • Preloads a channel with token, channelId, and userAccount.

    When audience members need to switch between different channels frequently, calling this method can help shorten the time it takes to join a channel, thus reducing the time it takes for audience members to hear and see the host. If you join a preloaded channel, leave it, and want to rejoin the same channel, you do not need to call this method unless the token for preloading the channel expires. Failing to preload a channel does not mean that you can't join a channel, nor will it increase the time of joining a channel.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. For example, the User Account is empty. You need to pass in a valid parameter and join the channel again. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. -102: The channel name is invalid. You need to pass in a valid channel name and join the channel again.

    Parameters

    • token: string

      The token generated on your server for authentication. When the token for preloading channels expires, you can update the token based on the number of channels you preload. When preloading one channel, call this method to pass in the new token. When preloading more than one channel: If you use a wildcard token for all preloaded channels, call updatePreloadChannelToken to update the token. When generating a wildcard token, ensure the user ID is not set as 0. If you use different tokens to preload different channels, call this method to pass in your user ID, channel name, and the new token.

    • channelId: string

      The channel name that you want to preload. This parameter signifies the channel in which users engage in real-time audio and video interaction. Under the premise of the same App ID, users who fill in the same channel ID enter the same channel for audio and video interaction. The string length must be less than 64 bytes. Supported characters (89 characters in total): All lowercase English letters: a to z. All uppercase English letters: A to Z. All numeric characters: 0 to 9. "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    • userAccount: string

      The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as null. Supported characters are as follows (89 in total): The 26 lowercase English letters: a to z. The 26 uppercase English letters: A to Z. All numeric characters: 0 to 9. Space "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    Returns number

  • Preloads a specified audio effect file into the memory.

    Ensure the size of all preloaded files does not exceed the limit. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The audio effect ID. The ID of each audio effect file is unique.

    • filePath: string

      File path: Android: The file path, which needs to be accurate to the file name and suffix. Agora supports URL addresses, absolute paths, or file paths that start with /assets/. You might encounter permission issues if you use an absolute path to access a local file, so Agora recommends using a URI address instead. For example: content://com.android.providers.media.documents/document/audio%3A14441 iOS: The absolute path or URL address (including the filename extension) of the audio effect file. For example: /var/mobile/Containers/Data/audio.mp4.

    • Optional startPos: number

      The playback position (ms) of the audio effect file.

    Returns number

  • Queries the focal length capability supported by the camera.

    If you want to enable the wide-angle or ultra-wide-angle mode for camera capture, it is recommended to start by calling this method to check whether the device supports the required focal length capability. Then, adjust the camera's focal length configuration based on the query result by calling setCameraCapturerConfiguration, ensuring the best camera capture performance.

    Returns

    Returns an object containing the following properties: focalLengthInfos : An array of FocalLengthInfo objects, which contain the camera's orientation and focal length type. size : The number of focal length information items retrieved.

    Returns {
        focalLengthInfos: FocalLengthInfo[];
        size: number;
    }
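
    A sketch of the query-then-configure flow described above, assuming an initialized IRtcEngine instance named engine; the focalLengthType field name and the CameraFocalLengthType enumerators are assumptions based on the names used elsewhere in this reference (FocalLengthInfo, CameraFocalLengthDefault), so check them against your SDK version:

    import { CameraFocalLengthType } from 'react-native-agora';

    // Check which focal lengths the device supports before configuring capture.
    const { focalLengthInfos, size } = engine.queryCameraFocalLengthCapability();

    // Field and enumerator names below are assumed; adjust to your SDK version.
    const supportsUltraWide = (focalLengthInfos ?? [])
      .slice(0, size)
      .some((info) => info.focalLengthType === CameraFocalLengthType.CameraFocalLengthUltraWide);

    // Only request the ultra-wide lens if the device actually supports it.
    engine.setCameraCapturerConfiguration({
      cameraFocalLengthType: supportsUltraWide
        ? CameraFocalLengthType.CameraFocalLengthUltraWide
        : CameraFocalLengthType.CameraFocalLengthDefault,
    });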

  • Queries the video codec capabilities of the SDK.

    Returns

    If the call is successful, an object containing the following attributes is returned: codecInfo : The CodecCapInfo array, indicating the video codec capability of the device. size : The size of the CodecCapInfo array. If the call times out, modify the call logic and do not invoke the method on the main thread.

    Returns {
        codecInfo: CodecCapInfo[];
        size: number;
    }

  • Queries device score.

    Returns

    > 0: The method call succeeds; the value is the current device's score, in the range [0,100]. The larger the value, the stronger the device capability. Most devices are rated between 60 and 100. < 0: Failure.

    Returns number

  • Queries the highest frame rate supported by the device during screen sharing.

    Returns

    The highest frame rate supported by the device, if the method is called successfully. See ScreenCaptureFramerateCapability. < 0: Failure.

    Returns number

  • Allows a user to rate a call after the call ends.

    Ensure that you call this method after leaving a channel.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -2: The parameter is invalid.

    Parameters

    • callId: string

      The current call ID. You can get the call ID by calling getCallId.

    • rating: number

      The value is between 1 (the lowest score) and 5 (the highest score).

    • description: string

      A description of the call. The string length should be less than 800 bytes.

    Returns number

  • Registers an encoded audio observer.

    Call this method after joining a channel. You can call this method or startAudioRecording to set the recording type and quality of audio files, but Agora does not recommend using this method and startAudioRecording at the same time. Only the method called later will take effect.

    Returns

    One IAudioEncodedFrameObserver object.

    Returns number

  • Register an audio spectrum observer.

    After successfully registering the audio spectrum observer and calling enableAudioSpectrumMonitor to enable the audio spectrum monitoring, the SDK reports the callback that you implement in the IAudioSpectrumObserver class according to the time interval you set. You can call this method either before or after joining a channel.

    Returns

    One IAudioSpectrumObserver object.

    Parameters

    Returns number

  • Adds event handlers.

    The SDK uses the IRtcEngineEventHandler class to send callbacks to the app. The app inherits the methods of this class to receive these callbacks. All methods in this class have default (empty) implementations. Therefore, apps only need to implement the callbacks required by their scenarios. In the callbacks, avoid time-consuming tasks or calling APIs that can block the thread, such as the sendStreamMessage method. Otherwise, the SDK may not work properly.

    Returns

    true : Success. false : Failure.

    Parameters

    Returns boolean
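
    A minimal sketch, assuming an initialized IRtcEngine instance named engine and the IRtcEngineEventHandler type from the react-native-agora package; only the callbacks the app cares about are implemented:

    import { IRtcEngineEventHandler } from 'react-native-agora';

    const eventHandler: IRtcEngineEventHandler = {
      onJoinChannelSuccess: (connection, elapsed) => {
        console.log(`joined ${connection.channelId} after ${elapsed} ms`);
      },
      onUserJoined: (connection, remoteUid) => {
        console.log(`remote user ${remoteUid} joined`);
      },
      onError: (err, msg) => {
        console.warn(`SDK error ${err}: ${msg}`);
      },
    };

    // Keep the callbacks lightweight; avoid blocking work inside them.
    engine.registerEventHandler(eventHandler);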

  • Registers an extension.

    For extensions external to the SDK (such as those from Extensions Marketplace and SDK Extensions), you need to load them before calling this method. Extensions internal to the SDK (those included in the full SDK package) are automatically loaded and registered after the initialization of IRtcEngine.

    Parameters

    • provider: string

      The name of the extension provider.

    • extension: string

      The name of the extension.

    • Optional type: MediaSourceType

      Source type of the extension. See MediaSourceType.

    Returns number

  • Registers a user account.

    Once the registration is successful, the user account can be used to identify the local user, and the user can use it to join the channel. This method is optional. If you want to join a channel using a user account, you can choose one of the following methods: Call the registerLocalUserAccount method to register a user account, and then call the joinChannelWithUserAccount method to join a channel, which can shorten the time it takes to enter the channel. Call the joinChannelWithUserAccount method directly to join a channel. Ensure that the userAccount is unique in the channel. To ensure smooth communication, use the same parameter type to identify the user. For example, if a user joins the channel with a UID, then ensure all the other users use the UID too. The same applies to the user account. If a user joins the channel with the Agora Web SDK, ensure that the ID of the user is set to the same parameter type.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • appId: string

      The App ID of your project on Agora Console.

    • userAccount: string

      The user account. This parameter is used to identify the user in the channel for real-time audio and video engagement. You need to set and manage user accounts yourself and ensure that each user account in the same channel is unique. The maximum length of this parameter is 255 bytes. Ensure that you set this parameter and do not set it as null. Supported characters are as follows (89 in total): The 26 lowercase English letters: a to z. The 26 uppercase English letters: A to Z. All numeric characters: 0 to 9. Space "!", "#", "$", "%", "&", "(", ")", "+", "-", ":", ";", "<", "=", ".", ">", "?", "@", "[", "]", "^", "_", "{", "}", "|", "~", ","

    Returns number

  • Registers the metadata observer.

    You need to implement the IMetadataObserver class and specify the metadata type in this method. This method enables you to add synchronized metadata in the video stream for more diversified live interactive streaming, such as sending shopping links, digital coupons, and online quizzes. Call this method before joinChannel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • observer: IMetadataObserver

      The metadata observer. See IMetadataObserver.

    • type: MetadataType

      The metadata type. The SDK currently only supports VideoMetadata. See MetadataType.

    Returns number

  • Releases the IRtcEngine instance.

    This method releases all resources used by the Agora SDK. Use this method for apps in which users occasionally make voice or video calls. When users do not make calls, you can free up resources for other operations. After a successful method call, you can no longer use any method or callback in the SDK. If you want to use the real-time communication functions again, you must call createAgoraRtcEngine and initialize to create a new IRtcEngine instance. This method can be called synchronously. You need to wait for the resources of IRtcEngine to be released before performing other operations (for example, creating a new IRtcEngine object). Therefore, Agora recommends calling this method in a child thread to avoid blocking the main thread. In addition, Agora does not recommend calling release in any callback of the SDK. Otherwise, the SDK cannot release the resources until the callbacks return results, which may result in a deadlock.

    Parameters

    • Optional sync: boolean

      Whether the method is called synchronously: true : Synchronous call. false : Asynchronous call. Currently this method only supports synchronous calls; do not set this parameter to false.

    Returns void
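
    A sketch of releasing the engine when it is no longer needed, here tied to a React effect cleanup; the App ID is a placeholder and the hook name is illustrative:

    import { useEffect } from 'react';
    import { createAgoraRtcEngine } from 'react-native-agora';

    // Create the engine when the screen mounts and release it on unmount.
    export function useRtcEngineLifecycle() {
      useEffect(() => {
        const engine = createAgoraRtcEngine();
        engine.initialize({ appId: '<YOUR_APP_ID>' });

        return () => {
          // Frees all SDK resources; create and initialize a new engine to use RTC again.
          engine.release();
        };
      }, []);
    }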

  • Removes the specified IRtcEngineEvent listener. For listened events, if you no longer need to receive the callback message, you can call this method to remove the corresponding listener.

    Type Parameters

    Parameters

    • eventType: EventType

      The name of the target event to listen for. See IRtcEngineEvent.

    • Optional listener: IRtcEngineEvent[EventType]

      The callback function for eventType. Must pass in the same function object that was passed to addListener. Take removing the listener for onJoinChannelSuccess as an example:

      // Create an onJoinChannelSuccess object
      const onJoinChannelSuccess = (connection: RtcConnection, elapsed: number) => {};
      // Add one onJoinChannelSuccess listener
      engine.addListener('onJoinChannelSuccess', onJoinChannelSuccess);
      // Remove the onJoinChannelSuccess listener
      engine.removeListener('onJoinChannelSuccess', onJoinChannelSuccess);

    Returns void

  • Renews the token.

    You can call this method to pass a new token to the SDK. A token will expire after a certain period of time, at which point the SDK will be unable to establish a connection with the server.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. For example, the token is empty. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. 110: Invalid token. Ensure the following: The user ID specified when generating the token is consistent with the user ID used when joining the channel. The generated token is the same as the token passed in to join the channel.

    Parameters

    • token: string

      The new token.

    Returns number
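
    A sketch of the typical renewal flow, assuming an initialized IRtcEngine instance named engine; fetchToken is a hypothetical helper that requests a fresh token from your own token server:

    // Hypothetical helper that asks your token server for a new token.
    declare function fetchToken(channelId?: string, uid?: number): Promise<string>;

    // Renew the token when the SDK warns that the current one is about to expire.
    engine.addListener('onTokenPrivilegeWillExpire', async (connection, expiringToken) => {
      const newToken = await fetchToken(connection.channelId, connection.localUid);
      engine.renewToken(newToken);
    });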

  • Resumes the media stream relay to all target channels.

    After calling the pauseAllChannelMediaRelay method, you can call this method to resume relaying media streams to all destination channels. Call this method after pauseAllChannelMediaRelay.

    Returns

    0: Success. < 0: Failure. -5: The method call was rejected. There is no paused channel media relay.

    Returns number

  • Resumes playing all audio effect files.

    After you call pauseAllEffects to pause the playback, you can call this method to resume the playback.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Resumes playing and mixing the music file.

    After calling pauseAudioMixing to pause the playback, you can call this method to resume the playback.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Resumes playing a specified audio effect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The audio effect ID. The ID of each audio effect file is unique.

    Returns number

  • Selects the audio track used during playback.

    After getting the track index of the audio file, you can call this method to specify any track to play. For example, if different tracks of a multi-track file store songs in different languages, you can call this method to set the playback language. For the supported formats of audio files, see. You need to call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged (AudioMixingStatePlaying) callback.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • index: number

      The audio track you want to specify. The value should be greater than 0 and less than the value returned by getAudioTrackCount.

    Returns number
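
    A sketch of switching tracks once mixing has started, assuming an initialized IRtcEngine instance named engine; the file path is a placeholder and the AudioMixingStateType enumerator name follows the description above:

    import { AudioMixingStateType } from 'react-native-agora';

    // Wait for the mixing state to reach "playing" before querying or selecting tracks.
    engine.addListener('onAudioMixingStateChanged', (state) => {
      if (state === AudioMixingStateType.AudioMixingStatePlaying) {
        const trackCount = engine.getAudioTrackCount();
        if (trackCount > 1) {
          engine.selectAudioTrack(1); // e.g. the track that stores another language
        }
      }
    });

    // Start mixing a local multi-track music file (placeholder path).
    engine.startAudioMixing('/path/to/multitrack-song.mp4', false, 1);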

  • Reports customized messages.

    Agora supports reporting and analyzing customized messages. This function is in the beta stage with a free trial. The beta version supports reporting a maximum of 10 message pieces within 6 seconds, with each message piece not exceeding 256 bytes and each string not exceeding 100 bytes. To try out this function, contact Agora to discuss the format of the customized messages.

    Parameters

    • id: string
    • category: string
    • event: string
    • label: string
    • value: number

    Returns number

  • Sends media metadata.

    If the metadata is sent successfully, the SDK triggers the onMetadataReceived callback on the receiver.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • metadata: Metadata

      Media metadata. See Metadata.

    • sourceType: VideoSourceType

      The type of the video source. See VideoSourceType.

    Returns number

  • Sends data stream messages.

    After calling createDataStream, you can call this method to send data stream messages to all users in the channel. The SDK has the following restrictions on this method: Each client within the channel can have up to 5 data channels simultaneously, with a total shared packet bitrate limit of 30 KB/s for all data channels. Each data channel can send up to 60 packets per second, with each packet being a maximum of 1 KB. A successful method call triggers the onStreamMessage callback on the remote client, from which the remote user gets the stream message. A failed method call triggers the onStreamMessageError callback on the remote client. This method needs to be called after createDataStream and joining the channel. In live streaming scenarios, this method only applies to hosts.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • streamId: number

      The data stream ID. You can get the data stream ID by calling createDataStream.

    • data: Uint8Array

      The message to be sent.

    • length: number

      The length of the data.

    Returns number
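
    A minimal sketch, assuming an initialized IRtcEngine instance named engine and that the local user has already joined the channel; the DataStreamConfig fields used here (syncWithAudio, ordered) and the ASCII-only encoding are simplifying assumptions:

    // Create one data stream after joining the channel.
    const streamId = engine.createDataStream({ syncWithAudio: false, ordered: true });

    // Encode a short ASCII message as bytes and send it to all users in the channel.
    const message = 'hello from the host';
    const payload = Uint8Array.from(message, (c) => c.charCodeAt(0));
    engine.sendStreamMessage(streamId, payload, payload.length);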

  • Sets whether to enable the AI noise suppression function and sets the noise suppression mode.

    You can call this method to enable the AI noise suppression function. Once enabled, the SDK automatically detects and reduces stationary and non-stationary noise from your audio on the premise of ensuring the quality of the human voice. Stationary noise refers to a noise signal with constant average statistical properties and negligibly small fluctuations of level within the period of observation. Common sources of stationary noise are: television, air conditioners, machinery, and so on. Non-stationary noise refers to a noise signal with huge fluctuations of level within the period of observation; common sources of non-stationary noise are: thunder, explosions, cracking, and so on.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable the AI noise suppression function: true : Enable the AI noise suppression. false : (Default) Disable the AI noise suppression.

    • mode: AudioAinsMode

      The AI noise suppression modes. See AudioAinsMode.

    Returns number

  • Sets audio advanced options.

    If you have advanced audio processing requirements, such as capturing and sending stereo audio, you can call this method to set advanced audio options. Call this method after calling joinChannel, enableAudio and enableLocalAudio.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • options: AdvancedAudioOptions

      The advanced options for audio. See AdvancedAudioOptions.

    • Optional sourceType: number

    Returns number

  • Sets parameters for SDK preset audio effects.

    To achieve better vocal effects, it is recommended that you call the following APIs before calling this method: Call setAudioScenario to set the audio scenario to the high-quality audio scenario, namely AudioScenarioGameStreaming (3). Call setAudioProfile to set the profile parameter to AudioProfileMusicHighQuality (4) or AudioProfileMusicHighQualityStereo (5). Call this method to set the following parameters for the local user who sends an audio stream: 3D voice effect: Sets the cycle period of the 3D voice effect. Pitch correction effect: Sets the basic mode and tonic pitch of the pitch correction effect. Different songs have different modes and tonic pitches. Agora recommends bounding this method with interface elements to enable users to adjust the pitch correction interactively. After setting the audio parameters, all users in the channel can hear the effect. Do not set the profile parameter in setAudioProfile to AudioProfileSpeechStandard (1) or AudioProfileIot (6), or the method does not take effect. You can call this method either before or after joining a channel. This method has the best effect on human voice processing, and Agora does not recommend calling this method to process audio data containing music. After calling setAudioEffectParameters, Agora does not recommend calling the following methods, otherwise the effect set by setAudioEffectParameters will be overwritten: setAudioEffectPreset, setVoiceBeautifierPreset, setLocalVoicePitch, setLocalVoiceEqualization, setLocalVoiceReverb, setVoiceBeautifierParameters, setVoiceConversionPreset. This method relies on the voice beautifier dynamic library libagora_audio_beauty_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • preset: AudioEffectPreset

      The options for SDK preset audio effects: RoomAcoustics3dVoice, 3D voice effect: You need to set the profile parameter in setAudioProfile to AudioProfileMusicStandardStereo (3) or AudioProfileMusicHighQualityStereo (5) before setting this enumerator; otherwise, the enumerator setting does not take effect. If the 3D voice effect is enabled, users need to use stereo audio playback devices to hear the anticipated voice effect. PitchCorrection, Pitch correction effect:

    • param1: number

      If you set preset to RoomAcoustics3dVoice, param1 sets the cycle period of the 3D voice effect. The value range is [1,60] and the unit is seconds. The default value is 10, indicating that the voice moves around you every 10 seconds. If you set preset to PitchCorrection, param1 indicates the basic mode of the pitch correction effect: 1 : (Default) Natural major scale. 2 : Natural minor scale. 3 : Japanese pentatonic scale.

    • param2: number

      If you set preset to RoomAcoustics3dVoice , you need to set param2 to 0. If you set preset to PitchCorrection, param2 indicates the tonic pitch of the pitch correction effect: 1 : A 2 : A# 3 : B 4 : (Default) C 5 : C# 6 : D 7 : D# 8 : E 9 : F 10 : F# 11 : G 12 : G#

    Returns number
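
    A sketch applying the recommended scenario and profile before setting preset parameters, assuming an initialized IRtcEngine instance named engine:

    import {
      AudioEffectPreset,
      AudioProfileType,
      AudioScenarioType,
    } from 'react-native-agora';

    // Recommended scenario and profile for vocal effects (see the description above).
    engine.setAudioScenario(AudioScenarioType.AudioScenarioGameStreaming);
    engine.setAudioProfile(AudioProfileType.AudioProfileMusicHighQualityStereo);

    // Pitch correction: natural minor scale (param1 = 2), tonic pitch D (param2 = 6).
    engine.setAudioEffectParameters(AudioEffectPreset.PitchCorrection, 2, 6);

    // Alternatively, 3D voice: one full cycle every 20 seconds; param2 must be 0 for this preset.
    engine.setAudioEffectParameters(AudioEffectPreset.RoomAcoustics3dVoice, 20, 0);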

  • Sets an SDK preset audio effect.

    Call this method to set an SDK preset audio effect for the local user who sends an audio stream. This audio effect does not change the gender characteristics of the original voice. After setting an audio effect, all users in the channel can hear the effect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • preset: AudioEffectPreset

      The options for SDK preset audio effects. See AudioEffectPreset.

    Returns number

  • Sets the channel mode of the current audio file.

    In a stereo music file, the left and right channels can store different audio data. According to your needs, you can set the channel mode to original mode, left channel mode, right channel mode, or mixed channel mode.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets the pitch of the local music file.

    When a local music file is mixed with a local human voice, call this method to set the pitch of the local music file only.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • pitch: number

      Sets the pitch of the local music file by the chromatic scale. The default value is 0, which means keeping the original pitch. The value ranges from -12 to 12, and the pitch value between consecutive values is a chromatic value. The greater the absolute value of this parameter, the higher or lower the pitch of the local music file.

    Returns number

  • Sets the playback speed of the current audio file.

    Ensure you call this method after calling startAudioMixing and receiving the onAudioMixingStateChanged callback reporting the state as AudioMixingStatePlaying.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • speed: number

      The playback speed. Agora recommends that you set this to a value between 50 and 400, defined as follows: 50: Half the original speed. 100: The original speed. 400: 4 times the original speed.

    Returns number

  • Sets the audio mixing position.

    Call this method to set the playback position of the music file to a different starting position (the default plays from the beginning).

    Returns

    0: Success. < 0: Failure.

    Parameters

    • pos: number

      Integer. The playback position (ms).

    Returns number

  • Sets the audio profile and audio scenario.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • profile: AudioProfileType

      The audio profile, including the sampling rate, bitrate, encoding mode, and the number of channels. See AudioProfileType.

    • Optional scenario: AudioScenarioType

      The audio scenarios. Under different audio scenarios, the device uses different volume types. See AudioScenarioType.

    Returns number

  • Sets audio scenarios.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • scenario: AudioScenarioType

      The audio scenarios. Under different audio scenarios, the device uses different volume types. See AudioScenarioType.

    Returns number

  • Sets the operational permission of the SDK on the audio session.

    The SDK and the app can both configure the audio session by default. If you need to only use the app to configure the audio session, this method restricts the operational permission of the SDK on the audio session. You can call this method either before or after joining a channel. Once you call this method to restrict the operational permission of the SDK on the audio session, the restriction takes effect when the SDK needs to change the audio session. This method is only available for iOS platforms. This method does not restrict the operational permission of the app on the audio session.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • restriction: AudioSessionOperationRestriction

      The operational permission of the SDK on the audio session. See AudioSessionOperationRestriction. This parameter is in bit mask format, and each bit corresponds to a permission.

    Returns number

  • Sets the image enhancement options.

    Enables or disables image enhancement, and sets the options.

    Returns

    0: Success. < 0: Failure. -4: The current device does not support this feature. Possible reasons include: The current device capabilities do not meet the requirements for image enhancement. Agora recommends you replace it with a high-performance device. The current device version is lower than Android 5.0 and does not support this feature. Agora recommends you replace the device or upgrade the operating system.

    Parameters

    • enabled: boolean

      Whether to enable the image enhancement function: true : Enable the image enhancement function. false : (Default) Disable the image enhancement function.

    • options: BeautyOptions

      The image enhancement options. See BeautyOptions.

    • Optional type: MediaSourceType

      Source type of the extension. See MediaSourceType.

    Returns number

  • Sets whether to enable auto exposure.

    You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1). This method applies to iOS only.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable auto exposure: true : Enable auto exposure. false : Disable auto exposure.

    Returns number

  • Enables the camera auto-face focus function.

    By default, the SDK disables face autofocus on Android and enables face autofocus on iOS. To set face autofocus, call this method.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable face autofocus: true : Enable the camera auto-face focus function. false : Disable face autofocus.

    Returns number

  • Sets the camera capture configuration.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • config: CameraCapturerConfiguration

      The camera capture configuration. See CameraCapturerConfiguration. In this method, you do not need to set the deviceId parameter.

    Returns number

  • Sets the rotation angle of the captured video.

    You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1). When the video capture device does not have the gravity sensing function, you can call this method to manually adjust the rotation angle of the captured video.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets the camera exposure value.

    Insufficient or excessive lighting in the shooting environment can affect the image quality of video capture. To achieve optimal video quality, you can use this method to adjust the camera's exposure value. You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1). Before calling this method, Agora recommends calling isCameraExposureSupported to check whether the current camera supports adjusting the exposure value. By calling this method, you adjust the exposure value of the currently active camera, that is, the camera specified when calling setCameraCapturerConfiguration.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • factor: number

      The camera exposure value. The default value is 0, which means using the default exposure of the camera. The larger the value, the greater the exposure. When the video image is overexposed, you can reduce the exposure value; when the video image is underexposed and the dark details are lost, you can increase the exposure value. If the exposure value you specified is beyond the range supported by the device, the SDK will automatically adjust it to the actual supported range of the device. On Android, the value range is [-20.0, 20.0]. On iOS, the value range is [-8.0, 8.0].

    Returns number

  • Sets the camera exposure position.

    You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1). After a successful method call, the SDK triggers the onCameraExposureAreaChanged callback.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • positionXinView: number

      The horizontal coordinate of the touchpoint in the view.

    • positionYinView: number

      The vertical coordinate of the touchpoint in the view.

    Returns number

  • Sets the camera manual focus position.

    You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1). After a successful method call, the SDK triggers the onCameraFocusAreaChanged callback.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • positionX: number

      The horizontal coordinate of the touchpoint in the view.

    • positionY: number

      The vertical coordinate of the touchpoint in the view.

    Returns number

  • Set the camera stabilization mode.

    This method applies to iOS only. The camera stabilization mode is off by default. You need to call this method to turn it on and set the appropriate stabilization mode.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Enables the camera flash.

    You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).

    Returns

    0: Success. < 0: Failure.

    Parameters

    • isOn: boolean

      Whether to turn on the camera flash: true : Turn on the flash. false : (Default) Turn off the flash.

    Returns number

  • Sets the camera zoom factor.

    For iOS devices equipped with multi-lens rear cameras, such as those featuring dual-camera (wide-angle and ultra-wide-angle) or triple-camera (wide-angle, ultra-wide-angle, and telephoto), you can call setCameraCapturerConfiguration first to set the cameraFocalLengthType as CameraFocalLengthDefault (0) (standard lens). Then, adjust the camera zoom factor to a value less than 1.0. This configuration allows you to capture video with an ultra-wide-angle perspective. You must call this method after enableVideo. The setting result will take effect after the camera is successfully turned on, that is, after the SDK triggers the onLocalVideoStateChanged callback and returns the local video state as LocalVideoStreamStateCapturing (1).

    Returns

    The camera zoom factor value, if the method call succeeds. < 0: Failure.

    Parameters

    • factor: number

      The camera zoom factor. For devices that do not support ultra-wide-angle, the value ranges from 1.0 to the maximum zoom factor; for devices that support ultra-wide-angle, the value ranges from 0.5 to the maximum zoom factor. You can get the maximum zoom factor supported by the device by calling the getCameraMaxZoomFactor method.

    Returns number
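
    A sketch that clamps the requested zoom to the device's capability, assuming an initialized IRtcEngine instance named engine:

    // Query the maximum zoom supported by the current camera and clamp the request.
    const maxZoom = engine.getCameraMaxZoomFactor();
    const requested = 2.0; // try to zoom in 2x
    engine.setCameraZoomFactor(Math.min(requested, maxZoom));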

  • Sets the channel profile.

    You can call this method to set the channel profile. The SDK adopts different optimization strategies for different channel profiles. For example, in a live streaming scenario, the SDK prioritizes video quality. After initializing the SDK, the default channel profile is the live streaming profile. In different channel scenarios, the default audio routing of the SDK is different. See setDefaultAudioRouteToSpeakerphone.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. -7: The SDK is not initialized.

    Parameters

    Returns number

  • Set the user role and the audience latency level in a live streaming scenario.

    By default, the SDK sets the user role as audience. You can call this method to set the user role as host. The user role determines the user's permissions at the SDK level, including whether they can publish audio and video streams in a channel.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -2: The parameter is invalid. -5: The request is rejected. -7: The SDK is not initialized.

    Parameters

    • role: ClientRoleType

      The user role. See ClientRoleType. If you set the user role as an audience member, you cannot publish audio and video streams in the channel. If you want to publish media streams in a channel during live streaming, ensure you set the user role as broadcaster.

    • Optional options: ClientRoleOptions

      The detailed options of a user, including the user level. See ClientRoleOptions.

    Returns number
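
    A sketch of switching roles, assuming an initialized IRtcEngine instance named engine; the AudienceLatencyLevelType enumerator name is an assumption, so check it against your SDK version:

    import { ClientRoleType, AudienceLatencyLevelType } from 'react-native-agora';

    // Promote the local user to host so they can publish audio and video.
    engine.setClientRole(ClientRoleType.ClientRoleBroadcaster);

    // Or remain in the audience with ultra-low latency (enumerator name assumed).
    engine.setClientRole(ClientRoleType.ClientRoleAudience, {
      audienceLatencyLevel: AudienceLatencyLevelType.AudienceLatencyLevelUltraLowLatency,
    });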

  • Sets up cloud proxy service.

    When users' network access is restricted by a firewall, configure the firewall to allow specific IP addresses and ports provided by Agora; then, call this method to enable the cloud proxy and set the cloud proxy type with the proxyType parameter. After successfully connecting to the cloud proxy, the SDK triggers the onConnectionStateChanged (ConnectionStateConnecting, ConnectionChangedSettingProxyServer) callback. To disable the cloud proxy that has been set, call setCloudProxy (NoneProxy). To change the cloud proxy type that has been set, call setCloudProxy (NoneProxy) first, and then call setCloudProxy to set the proxyType you want. Agora recommends that you call this method before joining a channel. When a user is behind a firewall and uses the Force UDP cloud proxy, the services for Media Push and cohosting across channels are not available. When you use the Force TCP cloud proxy, note that an error would occur when calling the startAudioMixing method to play online music files in the HTTP protocol. The services for Media Push and cohosting across channels use the cloud proxy with the TCP protocol.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. -7: The SDK is not initialized.

    Parameters

    • proxyType: CloudProxyType

      The type of the cloud proxy. See CloudProxyType. This parameter is mandatory. The SDK reports an error if you do not pass in a value.

    Returns number

  • Sets color enhancement.

    The video images captured by the camera can have color distortion. The color enhancement feature intelligently adjusts video characteristics such as saturation and contrast to enhance the video color richness and color reproduction, making the video more vivid. You can call this method to enable the color enhancement feature and set the options of the color enhancement effect. Call this method after calling enableVideo. The color enhancement feature has certain performance requirements on devices. With color enhancement turned on, Agora recommends that you change the color enhancement level to one that consumes less performance or turn off color enhancement if your device is experiencing severe heat problems. Both this method and setExtensionProperty can enable color enhancement: When you use the SDK to capture video, Agora recommends this method (this method only works for video captured by the SDK). When you use an external video source to implement custom video capture, or send an external video source to the SDK, Agora recommends using setExtensionProperty. This method relies on the image enhancement dynamic library libagora_clear_vision_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable color enhancement: true Enable color enhancement. false : (Default) Disable color enhancement.

    • options: ColorEnhanceOptions

      The color enhancement options. See ColorEnhanceOptions.

    • Optional type: MediaSourceType

      The type of the video source. See MediaSourceType.

    Returns number
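
    A possible call sequence, sketched under the assumption that ColorEnhanceOptions exposes strengthLevel and skinProtectLevel fields in the range [0.0, 1.0]:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    engine.enableVideo(); // color enhancement must be set after enableVideo

    engine.setColorEnhanceOptions(true, {
      strengthLevel: 0.5,    // assumed field: strength of the color enhancement
      skinProtectLevel: 1.0, // assumed field: how strongly skin tones are preserved
    });
    ```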

  • Sets the default audio playback route.

    Most mobile phones have two audio routes: an earpiece at the top, and a speakerphone at the bottom. The earpiece plays at a lower volume, and the speakerphone at a higher volume. When setting the default audio route, you determine whether audio playback comes through the earpiece or speakerphone when no external audio device is connected. In different scenarios, the default audio routing of the system is also different. See the following: Voice call: Earpiece. Audio broadcast: Speakerphone. Video call: Speakerphone. Video broadcast: Speakerphone. You can call this method to change the default audio route. After calling this method to set the default audio route, the actual audio route of the system will change with the connection of external audio devices (wired headphones or Bluetooth headphones).

    Returns

    0: Success. < 0: Failure.

    Parameters

    • defaultToSpeaker: boolean

      Whether to set the speakerphone as the default audio route: true : Set the speakerphone as the default audio route. false : Set the earpiece as the default audio route.

    Returns number

  • Sets the audio profile of the audio streams directly pushed to the CDN by the host.

    When you set publishMicrophoneTrack or publishCustomAudioTrack in DirectCdnStreamingMediaOptions as true to capture audio, you can call this method to set the audio profile.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • profile: AudioProfileType

      The audio profile, including the sampling rate, bitrate, encoding mode, and the number of channels. See AudioProfileType.

    Returns number

  • Sets the video profile of the media streams directly pushed to the CDN by the host.

    This method only affects video streams captured by cameras or screens, or from custom video capture sources. That is, when you set publishCameraTrack or publishCustomVideoTrack in DirectCdnStreamingMediaOptions as true to capture video, you can call this method to set the video profiles. If your local camera does not support the video resolution you set, the SDK automatically adjusts the video resolution to a value that is closest to your settings for capture, encoding, or streaming, with the same aspect ratio as the resolution you set. You can get the actual resolution of the video streams through the onDirectCdnStreamingStats callback.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • config: VideoEncoderConfiguration

      Video profile. See VideoEncoderConfiguration. During CDN live streaming, Agora only supports setting OrientationMode as OrientationFixedLandscape or OrientationFixedPortrait.

    Returns number

  • Sets dual-stream mode configuration on the sender side.

    The SDK defaults to enabling low-quality video stream adaptive mode (AutoSimulcastStream) on the sender side, which means the sender does not actively send the low-quality video stream. The receiving end with the role of the host can initiate a low-quality video stream request by calling setRemoteVideoStreamType, and upon receiving the request, the sending end automatically starts sending the low-quality stream. If you want to modify this behavior, you can call this method and set mode to DisableSimulcastStream (never send low-quality video streams) or EnableSimulcastStream (always send low-quality video streams). If you want to restore the default behavior after making changes, you can call this method again with mode set to AutoSimulcastStream. The difference and connection between this method and enableDualStreamMode is as follows: When calling this method and setting mode to DisableSimulcastStream, it has the same effect as calling enableDualStreamMode and setting enabled to false. When calling this method and setting mode to EnableSimulcastStream, it has the same effect as calling enableDualStreamMode and setting enabled to true. Both methods can be called before or after joining a channel. If both methods are used, the settings in the method called later take precedence.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • mode: SimulcastStreamMode

      The mode in which the video stream is sent. See SimulcastStreamMode.

    • Optional streamConfig: SimulcastStreamConfig

      The configuration of the low-quality video stream. See SimulcastStreamConfig. When setting mode to DisableSimulcastStream, setting streamConfig will not take effect.

    Returns number
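
    For example, to force the sender to always publish the low-quality stream and later restore the default adaptive behavior (a sketch; the enum members are those named above):

    ```typescript
    import { IRtcEngine, SimulcastStreamMode } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // Always send the low-quality stream, regardless of receiver requests.
    engine.setDualStreamMode(SimulcastStreamMode.EnableSimulcastStream);

    // Restore the default adaptive behavior when no longer needed.
    engine.setDualStreamMode(SimulcastStreamMode.AutoSimulcastStream);
    ```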

  • Sets the format of the in-ear monitoring raw audio data.

    This method is used to set the in-ear monitoring audio data format reported by the onEarMonitoringAudioFrame callback. Before calling this method, you need to call enableInEarMonitoring, and set includeAudioFilters to EarMonitoringFilterBuiltInAudioFilters or EarMonitoringFilterNoiseSuppression. The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters set in this method. Sample interval (sec) = samplesPerCall / (sampleRate × channel). Ensure that the sample interval ≥ 0.01 (s). The SDK triggers the onEarMonitoringAudioFrame callback according to the sampling interval.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sampleRate: number

      The sample rate of the audio data reported in the onEarMonitoringAudioFrame callback, which can be set as 8,000, 16,000, 32,000, 44,100, or 48,000 Hz.

    • channel: number

      The number of audio channels reported in the onEarMonitoringAudioFrame callback. 1: Mono. 2: Stereo.

    • mode: RawAudioFrameOpModeType

      The use mode of the audio frame. See RawAudioFrameOpModeType.

    • samplesPerCall: number

      The number of data samples reported in the onEarMonitoringAudioFrame callback, such as 1,024 for the Media Push.

    Returns number
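
    A sketch of a parameter set that satisfies the sample-interval constraint: 1024 / (48000 × 1) ≈ 0.021 s ≥ 0.01 s. The EarMonitoringFilterType and RawAudioFrameOpModeReadOnly names are assumptions; the filter value itself is the one named above:

    ```typescript
    import {
      IRtcEngine,
      EarMonitoringFilterType,
      RawAudioFrameOpModeType,
    } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // In-ear monitoring must be enabled with a qualifying audio filter first.
    engine.enableInEarMonitoring(
      true,
      EarMonitoringFilterType.EarMonitoringFilterBuiltInAudioFilters
    );

    // 1024 samples / (48000 Hz × 1 channel) ≈ 0.021 s per onEarMonitoringAudioFrame callback.
    engine.setEarMonitoringAudioFrameParameters(
      48000, // sampleRate
      1,     // channel: mono
      RawAudioFrameOpModeType.RawAudioFrameOpModeReadOnly, // assumed member name
      1024   // samplesPerCall
    );
    ```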

  • Sets the playback position of an audio effect file.

    After a successful setting, the local audio effect file starts playing at the specified position. Call this method after playEffect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The audio effect ID. The ID of each audio effect file is unique.

    • pos: number

      The playback position (ms) of the audio effect file.

    Returns number

  • Sets the volume of the audio effects.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • volume: number

      The playback volume. The value range is [0, 100]. The default value is 100, which represents the original volume.

    Returns number

  • Enables/Disables the audio route to the speakerphone.

    For the default audio route in different scenarios, see the description in setDefaultAudioRouteToSpeakerphone.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • speakerOn: boolean

      Sets whether to enable the speakerphone or earpiece: true : Enable device state monitoring. The audio route is the speakerphone. false : Disable device state monitoring. The audio route is the earpiece.

    Returns number

  • Sets the properties of the extension.

    After enabling the extension, you can call this method to set the properties of the extension.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • provider: string

      The name of the extension provider.

    • extension: string

      The name of the extension.

    • key: string

      The key of the extension.

    • value: string

      The value of the extension key.

    • Optional type: MediaSourceType

      Source type of the extension. See MediaSourceType.

    Returns number

  • Sets the properties of the extension provider.

    You can call this method to set the attributes of the extension provider and initialize the relevant parameters according to the type of the provider.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • provider: string

      The name of the extension provider.

    • key: string

      The key of the extension.

    • value: string

      The value of the extension key.

    Returns number

  • Sets the low- and high-frequency parameters of the headphone equalizer.

    In a spatial audio effect scenario, if the preset headphone equalization effect is not achieved after calling the setHeadphoneEQPreset method, you can further adjust the headphone equalization effect by calling this method.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason).

    Parameters

    • lowGain: number

      The low-frequency parameters of the headphone equalizer. The value range is [-10,10]. The larger the value, the deeper the sound.

    • highGain: number

      The high-frequency parameters of the headphone equalizer. The value range is [-10,10]. The larger the value, the sharper the sound.

    Returns number

  • Sets the preset headphone equalization effect.

    This method is mainly used in spatial audio effect scenarios. You can select the preset headphone equalizer to listen to the audio to achieve the expected audio experience. If the headphones you use already have a good equalization effect, calling this method may not bring a significant improvement and could even diminish the listening experience.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason).

    Parameters

    Returns number

  • Sets the volume of the in-ear monitor.

    Returns

    0: Success. < 0: Failure. -2: Invalid parameter settings, such as in-ear monitoring volume exceeding the valid range (< 0 or > 400).

    Parameters

    • volume: number

      The volume of the in-ear monitor. The value range is [0,400]. 0: Mute. 100: (Default) The original volume. 400: Four times the original volume (amplifying the audio signals by four times).

    Returns number

  • Updates the display mode of the local video view.

    After initializing the local video view, you can call this method to update its rendering and mirror modes. It affects only the video view that the local user sees and does not impact the publishing of the local video.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • renderMode: RenderModeType

      The local video display mode. See RenderModeType.

    • Optional mirrorMode: VideoMirrorModeType

      The mirror mode of the local video view. See VideoMirrorModeType. If you use a front camera, the SDK enables the mirror mode by default; if you use a rear camera, the SDK disables the mirror mode by default.

    Returns number

  • Sets the local video mirror mode.

    Deprecated: This method is deprecated. Use setLocalRenderMode instead.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets the local voice equalization effect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • bandFrequency: AudioEqualizationBandFrequency

      The band frequency. The value ranges between 0 and 9, representing the respective 10-band center frequencies of the voice effects: 31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, and 16k Hz. See AudioEqualizationBandFrequency.

    • bandGain: number

      The gain of each band in dB. The value ranges between -15 and 15. The default value is 0.

    Returns number
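
    For instance, boosting a low-mid band and cutting a high band (a sketch; the band member names such as AudioEqualizationBand125 are assumptions):

    ```typescript
    import { IRtcEngine, AudioEqualizationBandFrequency } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // Boost the 125 Hz band by 3 dB and cut the 4 kHz band by 2 dB.
    engine.setLocalVoiceEqualization(AudioEqualizationBandFrequency.AudioEqualizationBand125, 3);
    engine.setLocalVoiceEqualization(AudioEqualizationBandFrequency.AudioEqualizationBand4k, -2);
    ```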

  • Set the formant ratio to change the timbre of human voice.

    The formant ratio affects the timbre of the voice. The smaller the value, the deeper the voice; the larger the value, the sharper the voice. After you set the formant ratio, all users in the channel can hear the changed voice. If you want to change the timbre and pitch of the voice at the same time, Agora recommends using this method together with setLocalVoicePitch.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • formantRatio: number

      The formant ratio. The value range is [-1.0, 1.0]. The default value is 0.0, which means do not change the timbre of the voice. Agora recommends setting this value within the range of [-0.4, 0.6]. Otherwise, the voice may be seriously distorted.

    Returns number

  • Changes the voice pitch of the local speaker.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • pitch: number

      The local voice pitch. The value range is [0.5,2.0]. The lower the value, the lower the pitch. The default value is 1.0 (no change to the pitch).

    Returns number

  • Sets the local voice reverberation.

    The SDK provides an easier-to-use method, setAudioEffectPreset, to directly implement preset reverb effects such as pop, R&B, and KTV. You can call this method either before or after joining a channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • reverbKey: AudioReverbType

      The reverberation key. Agora provides five reverberation keys, see AudioReverbType.

    • value: number

      The value of the reverberation key.

    Returns number

  • Sets the log file.

    Deprecated: This method is deprecated. Set the log file path by configuring the context parameter when calling initialize. Specifies an SDK output log file. The log file records all log data for the SDK’s operation.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • filePath: string

      The complete path of the log files. These log files are encoded in UTF-8.

    Returns number

  • Sets the log file size.

    Deprecated: Use the logConfig parameter in initialize instead. By default, the SDK generates five SDK log files and five API call log files with the following rules: The SDK log files are: agorasdk.log, agorasdk.1.log, agorasdk.2.log, agorasdk.3.log, and agorasdk.4.log. The API call log files are: agoraapi.log, agoraapi.1.log, agoraapi.2.log, agoraapi.3.log, and agoraapi.4.log. The default size of each SDK log file and API log file is 2,048 KB. These log files are encoded in UTF-8. The SDK writes the latest logs in agorasdk.log or agoraapi.log. When agorasdk.log is full, the SDK processes the log files in the following order: Delete the agorasdk.4.log file (if any). Rename agorasdk.3.log to agorasdk.4.log. Rename agorasdk.2.log to agorasdk.3.log. Rename agorasdk.1.log to agorasdk.2.log. Create a new agorasdk.log file. The overwrite rules for the agoraapi.log file are the same as for agorasdk.log. This method is used to set the size of the agorasdk.log file only and does not affect the agoraapi.log file.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • fileSizeInKBytes: number

      The size (KB) of an agorasdk.log file. The value range is [128,20480]. The default value is 2,048 KB. If you set fileSizeInKBytes smaller than 128 KB, the SDK automatically adjusts it to 128 KB; if you set fileSizeInKBytes greater than 20,480 KB, the SDK automatically adjusts it to 20,480 KB.

    Returns number

  • Sets the log output level of the SDK.

    Deprecated: Use logConfig in initialize instead. This method sets the output log level of the SDK. You can use one or a combination of the log filter levels. The log level follows the sequence of LogFilterOff, LogFilterCritical, LogFilterError, LogFilterWarn, LogFilterInfo, and LogFilterDebug. Choose a level to see the logs preceding that level. If, for example, you set the log level to LogFilterWarn, you see the logs within levels LogFilterCritical, LogFilterError and LogFilterWarn.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • filter: LogFilterType

      The output log level of the SDK. See LogFilterType.

    Returns number

  • Sets the output log level of the SDK.

    Deprecated: This method is deprecated. Set the log file level by configuring the context parameter when calling initialize. Choose a level to see the logs preceding that level.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • level: LogLevel

      The log level. See LogLevel.

    Returns number

  • Sets low-light enhancement.

    The low-light enhancement feature can adaptively adjust the brightness value of the video captured in situations with low or uneven lighting, such as backlit, cloudy, or dark scenes. It restores or highlights the image details and improves the overall visual effect of the video. You can call this method to enable the low-light enhancement feature and set the options of the low-light enhancement effect. Call this method after calling enableVideo. The low-light enhancement feature has certain performance requirements on devices. If your device overheats after you enable low-light enhancement, Agora recommends modifying the low-light enhancement options to a less performance-consuming level or disabling low-light enhancement entirely. Both this method and setExtensionProperty can turn on low-light enhancement: When you use the SDK to capture video, Agora recommends this method (this method only works for video captured by the SDK). When you use an external video source to implement custom video capture, or send an external video source to the SDK, Agora recommends using setExtensionProperty. This method relies on the image enhancement dynamic library libagora_clear_vision_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable low-light enhancement: true : Enable low-light enhancement. false : (Default) Disable low-light enhancement.

    • options: LowlightEnhanceOptions

      The low-light enhancement options. See LowlightEnhanceOptions.

    • Optional type: MediaSourceType

      The type of the video source. See MediaSourceType.

    Returns number

  • Sets the maximum size of the media metadata.

    After calling registerMediaMetadataObserver, you can call this method to set the maximum size of the media metadata.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • size: number

      The maximum size of media metadata.

    Returns number

  • Set the format of the raw audio data after mixing for audio capture and playback.

    The SDK calculates the sampling interval based on the samplesPerCall, sampleRate and channel parameters set in this method. Sample interval (sec) = samplePerCall /(sampleRate × channel). Ensure that the sample interval ≥ 0.01 (s). The SDK triggers the onMixedAudioFrame callback according to the sampling interval.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sampleRate: number

      The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz.

    • channel: number

      The number of audio channels. You can set the value as 1 or 2. 1: Mono. 2: Stereo.

    • samplesPerCall: number

      The number of data samples, such as 1024 for the Media Push.

    Returns number

  • Provides technical preview functionalities or special customizations by configuring the SDK with JSON options.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • parameters: string

      The parameters to be set, in a JSON string.

    Returns number

  • Sets the format of the raw audio playback data before mixing.

    The SDK triggers the onPlaybackAudioFrameBeforeMixing callback according to the sampling interval.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sampleRate: number

      The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz.

    • channel: number

      The number of audio channels. You can set the value as 1 or 2. 1: Mono. 2: Stereo.

    Returns number

  • Sets the format of the raw audio playback data.

    The SDK calculates the sampling interval based on the samplesPerCall, sampleRate and channel parameters set in this method. Sample interval (sec) = samplePerCall /(sampleRate × channel). Ensure that the sample interval ≥ 0.01 (s). The SDK triggers the onPlaybackAudioFrame callback according to the sampling interval.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sampleRate: number

      The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz.

    • channel: number

      The number of audio channels. You can set the value as 1 or 2. 1: Mono. 2: Stereo.

    • mode: RawAudioFrameOpModeType

      The use mode of the audio frame. See RawAudioFrameOpModeType.

    • samplesPerCall: number

      The number of data samples, such as 1024 for the Media Push.

    Returns number

  • Sets the format of the captured raw audio data.

    The SDK calculates the sampling interval based on the samplesPerCall, sampleRate and channel parameters set in this method. Sample interval (sec) = samplePerCall /(sampleRate × channel). Ensure that the sample interval ≥ 0.01 (s). The SDK triggers the onRecordAudioFrame callback according to the sampling interval.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sampleRate: number

      The sample rate returned in the callback, which can be set as 8000, 16000, 32000, 44100, or 48000 Hz.

    • channel: number

      The number of audio channels. You can set the value as 1 or 2. 1: Mono. 2: Stereo.

    • mode: RawAudioFrameOpModeType

      The use mode of the audio frame. See RawAudioFrameOpModeType.

    • samplesPerCall: number

      The number of data samples, such as 1024 for the Media Push.

    Returns number
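
    A sketch applying the same sample-interval rule to the captured audio: 1024 / (32000 × 1) = 0.032 s per callback, which is ≥ 0.01 s (the RawAudioFrameOpModeReadOnly member name is assumed):

    ```typescript
    import { IRtcEngine, RawAudioFrameOpModeType } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    engine.setRecordingAudioFrameParameters(
      32000, // sampleRate
      1,     // channel: mono
      RawAudioFrameOpModeType.RawAudioFrameOpModeReadOnly, // assumed member name
      1024   // samplesPerCall -> one onRecordAudioFrame callback every 0.032 s
    );
    ```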

  • Sets the default video stream type to subscribe to.

    The SDK will dynamically adjust the size of the corresponding video stream based on the size of the video window to save bandwidth and computing resources. The default aspect ratio of the low-quality video stream is the same as that of the high-quality video stream. According to the current aspect ratio of the high-quality video stream, the system will automatically allocate the resolution, frame rate, and bitrate of the low-quality video stream. Depending on the default behavior of the sender and the specific settings when calling setDualStreamMode, the scenarios for the receiver calling this method are as follows: The SDK enables low-quality video stream adaptive mode (AutoSimulcastStream) on the sender side by default, meaning only the high-quality video stream is transmitted. Only the receiver with the role of the host can call this method to initiate a low-quality video stream request. Once the sender receives the request, it starts automatically sending the low-quality video stream. At this point, all users in the channel can call this method to switch to low-quality video stream subscription mode. If the sender calls setDualStreamMode and sets mode to DisableSimulcastStream (never send low-quality video stream), then calling this method will have no effect. If the sender calls setDualStreamMode and sets mode to EnableSimulcastStream (always send low-quality video stream), both the host and audience receivers can call this method to switch to low-quality video stream subscription mode.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • streamType: VideoStreamType

      The default video-stream type. See VideoStreamType.

    Returns number

  • Updates the display mode of the video view of a remote user.

    After initializing the video view of a remote user, you can call this method to update its rendering and mirror modes. This method affects only the video view that the local user sees. During a call, you can call this method as many times as necessary to update the display mode of the video view of a remote user.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID of the remote user.

    • renderMode: RenderModeType

      The rendering mode of the remote user view.

    • mirrorMode: VideoMirrorModeType

      The mirror mode of the remote user view. See VideoMirrorModeType.

    Returns number

  • Sets the fallback option for the subscribed video stream based on the network conditions.

    An unstable network affects the audio and video quality in a video call or interactive live video streaming. If option is set as StreamFallbackOptionVideoStreamLow or StreamFallbackOptionAudioOnly, the SDK automatically switches the video from a high-quality stream to a low-quality stream or disables the video when the downlink network conditions cannot support both audio and video to guarantee the quality of the audio. Meanwhile, the SDK continuously monitors network quality and resumes subscribing to audio and video streams when the network quality improves. When the subscribed video stream falls back to an audio-only stream, or recovers from an audio-only stream to an audio-video stream, the SDK triggers the onRemoteSubscribeFallbackToAudioOnly callback.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • option: StreamFallbackOptions

      Fallback options for the subscribed stream. See StreamFallbackOptions.

    Returns number

  • Sets the spatial audio effect parameters of the remote user.

    Call this method after enableSpatialAudio. After successfully setting the spatial audio effect parameters of the remote user, the local user can hear the remote user with a sense of space.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets the video stream type to subscribe to.

    Depending on the default behavior of the sender and the specific settings when calling setDualStreamMode, the scenarios for the receiver calling this method are as follows: The SDK enables low-quality video stream adaptive mode (AutoSimulcastStream) on the sender side by default, meaning only the high-quality video stream is transmitted. Only the receiver with the role of the host can call this method to initiate a low-quality video stream request. Once the sender receives the request, it starts automatically sending the low-quality video stream. At this point, all users in the channel can call this method to switch to low-quality video stream subscription mode. If the sender calls setDualStreamMode and sets mode to DisableSimulcastStream (never send low-quality video stream), then calling this method will have no effect. If the sender calls setDualStreamMode and sets mode to EnableSimulcastStream (always send low-quality video stream), both the host and audience receivers can call this method to switch to low-quality video stream subscription mode. The SDK will dynamically adjust the size of the corresponding video stream based on the size of the video window to save bandwidth and computing resources. The default aspect ratio of the low-quality video stream is the same as that of the high-quality video stream. According to the current aspect ratio of the high-quality video stream, the system will automatically allocate the resolution, frame rate, and bitrate of the low-quality video stream. You can call this method either before or after joining a channel. If you call both this method and setRemoteDefaultVideoStreamType, the setting of this method takes effect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID.

    • streamType: VideoStreamType

      The video stream type, see VideoStreamType.

    Returns number
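
    For example, subscribing to a remote user's low-quality stream for a thumbnail view (a sketch; the VideoStreamLow member name is assumed, and 12345 is a placeholder uid):

    ```typescript
    import { IRtcEngine, VideoStreamType } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // Request the low-quality stream of uid 12345 (takes effect once the sender publishes it).
    engine.setRemoteVideoStreamType(12345, VideoStreamType.VideoStreamLow);
    ```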

  • Options for subscribing to remote video streams.

    When a remote user has enabled dual-stream mode, you can call this method to choose the option for subscribing to the video streams sent by the remote user. The default subscription behavior of the SDK for remote video streams depends on the type of registered video observer: If the IVideoFrameObserver observer is registered, the default is to subscribe to both raw data and encoded data. If the IVideoEncodedFrameObserver observer is registered, the default is to subscribe only to the encoded data. If both types of observers are registered, the default behavior follows the last registered video observer. For example, if the last registered observer is the IVideoFrameObserver observer, the default is to subscribe to both raw data and encoded data. If you want to modify the default behavior, or set different subscription options for different uids, you can call this method to set it.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID of the remote user.

    • options: VideoSubscriptionOptions

      The video subscription options. See VideoSubscriptionOptions.

    Returns number

  • Sets the 2D position (the position on the horizontal plane) of the remote user's voice.

    This method sets the 2D position and volume of a remote user, so that the local user can easily hear and identify the remote user's position. When the local user calls this method to set the voice position of a remote user, the voice difference between the left and right channels allows the local user to track the real-time position of the remote user, creating a sense of space. This method applies to massive multiplayer online games, such as Battle Royale games. For this method to work, enable stereo panning for remote users by calling the enableSoundPositionIndication method before joining a channel. For the best voice positioning, Agora recommends using a wired headset. Call this method after joining a channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID of the remote user.

    • pan: number

      The voice position of the remote user. The value ranges from -1.0 to 1.0: 0.0: (Default) The remote voice comes from the front. -1.0: The remote voice comes from the left. 1.0: The remote voice comes from the right.

    • gain: number

      The volume of the remote user. The value ranges from 0.0 to 100.0. The default value is 100.0 (the original volume of the remote user). The smaller the value, the lower the volume.

    Returns number
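
    A sketch of the two-step flow described above, with a placeholder uid:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // Enable stereo panning for remote users before joining the channel.
    engine.enableSoundPositionIndication(true);

    // After joining: place uid 12345 slightly to the left at a reduced volume.
    engine.setRemoteVoicePosition(12345, -0.5, 80.0);
    ```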

  • Selects the audio playback route in communication audio mode.

    This method is used to switch the audio route from Bluetooth headphones to the earpiece, wired headphones, or speakers in communication audio mode. This method is for Android only.

    Returns

    Without practical meaning.

    Parameters

    • route: number

      The audio playback route you want to use: -1: The default audio route. 0: Headphones with microphone. 1: Handset. 2: Headphones without microphone. 3: Device's built-in speaker. 4: (Not supported yet) External speakers. 5: Bluetooth headphones. 6: USB device.

    Returns number

  • Sets the content hint for screen sharing.

    A content hint suggests the type of the content being shared, so that the SDK applies different optimization algorithms to different types of content. If you don't call this method, the default content hint is ContentHintNone. You can call this method either before or after you start screen sharing.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. -8: The screen sharing state is invalid. Probably because you have shared other screens or windows. Try calling stopScreenCapture to stop the current sharing and start sharing the screen again.

    Parameters

    • contentHint: VideoContentHint

      The content hint for screen sharing. See VideoContentHint.

    Returns number

  • Sets the screen sharing scenario.

    When you start screen sharing or window sharing, you can call this method to set the screen sharing scenario. The SDK adjusts the video quality and experience of the sharing according to the scenario. Agora recommends that you call this method before joining a channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets the allowlist of subscriptions for audio streams.

    You can call this method to specify the audio streams of a user that you want to subscribe to. If a user is added in the allowlist and blocklist at the same time, only the blocklist takes effect. You can call this method either before or after joining a channel. The allowlist is not affected by the setting in muteRemoteAudioStream, muteAllRemoteAudioStreams and autoSubscribeAudio in ChannelMediaOptions. Once the allowlist of subscriptions is set, it is effective even if you leave the current channel and rejoin the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uidList: number[]

      The user ID list of users that you want to subscribe to. If you want to specify the audio streams of a user for subscription, add the user ID in this list. If you want to remove a user from the allowlist, you need to call the setSubscribeAudioAllowlist method to update the user ID list; this means you only add the uid of users that you want to subscribe to in the new user ID list.

    • uidNumber: number

      The number of users in the user ID list.

    Returns number

  • Set the blocklist of subscriptions for audio streams.

    You can call this method to specify the audio streams of a user that you do not want to subscribe to. You can call this method either before or after joining a channel. The blocklist is not affected by the setting in muteRemoteAudioStream, muteAllRemoteAudioStreams, and autoSubscribeAudio in ChannelMediaOptions. Once the blocklist of subscriptions is set, it is effective even if you leave the current channel and rejoin the channel. If a user is added in the allowlist and blocklist at the same time, only the blocklist takes effect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uidList: number[]

      The user ID list of users that you do not want to subscribe to. If you want to specify the audio streams of a user that you do not want to subscribe to, add the user ID in this list. If you want to remove a user from the blocklist, you need to call the setSubscribeAudioBlocklist method to update the user ID list; this means you only add the uid of users that you do not want to subscribe to in the new user ID list.

    • uidNumber: number

      The number of users in the user ID list.

    Returns number
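
    For example, blocking two users and then shrinking the blocklist by passing the full updated list (a sketch with placeholder uids):

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // Stop subscribing to the audio of uids 101 and 102.
    const blocklist = [101, 102];
    engine.setSubscribeAudioBlocklist(blocklist, blocklist.length);

    // To remove uid 102 from the blocklist, call the method again with the updated list.
    engine.setSubscribeAudioBlocklist([101], 1);
    ```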

  • Set the allowlist of subscriptions for video streams.

    You can call this method to specify the video streams of a user that you want to subscribe to. If a user is added in the allowlist and blocklist at the same time, only the blocklist takes effect. Once the allowlist of subscriptions is set, it is effective even if you leave the current channel and rejoin the channel. You can call this method either before or after joining a channel. The allowlist is not affected by the setting in muteRemoteVideoStream, muteAllRemoteVideoStreams, and autoSubscribeVideo in ChannelMediaOptions.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uidList: number[]

      The user ID list of users that you want to subscribe to. If you want to specify the video streams of a user for subscription, add the user ID of that user in this list. If you want to remove a user from the allowlist, you need to call the setSubscribeVideoAllowlist method to update the user ID list; this means you only add the uid of users that you want to subscribe to in the new user ID list.

    • uidNumber: number

      The number of users in the user ID list.

    Returns number

  • Set the blocklist of subscriptions for video streams.

    You can call this method to specify the video streams of a user that you do not want to subscribe to. If a user is added in the allowlist and blocklist at the same time, only the blocklist takes effect. Once the blocklist of subscriptions is set, it is effective even if you leave the current channel and rejoin the channel. You can call this method either before or after joining a channel. The blocklist is not affected by the setting in muteRemoteVideoStream, muteAllRemoteVideoStreams, and autoSubscribeVideo in ChannelMediaOptions.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uidList: number[]

      The user ID list of users that you do not want to subscribe to. If you want to specify the video streams of a user that you do not want to subscribe to, add the user ID of that user in this list. If you want to remove a user from the blocklist, you need to call the setSubscribeVideoBlocklist method to update the user ID list; this means you only add the uid of users that you do not want to subscribe to in the new user ID list.

    • uidNumber: number

      The number of users in the user ID list.

    Returns number

  • Sets video noise reduction.

    Underlit environments and low-end video capture devices can cause video images to contain significant noise, which affects video quality. In real-time interactive scenarios, video noise also consumes bitstream resources and reduces encoding efficiency during encoding. You can call this method to enable the video noise reduction feature and set the options of the video noise reduction effect. Call this method after calling enableVideo. Video noise reduction has certain requirements for equipment performance. If your device overheats after you enable video noise reduction, Agora recommends modifying the video noise reduction options to a less performance-consuming level or disabling video noise reduction entirely. Both this method and setExtensionProperty can turn on video noise reduction function: When you use the SDK to capture video, Agora recommends this method (this method only works for video captured by the SDK). When you use an external video source to implement custom video capture, or send an external video source to the SDK, Agora recommends using setExtensionProperty. This method relies on the image enhancement dynamic library libagora_clear_vision_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • enabled: boolean

      Whether to enable video noise reduction: true : Enable video noise reduction. false : (Default) Disable video noise reduction.

    • options: VideoDenoiserOptions

      The video noise reduction options. See VideoDenoiserOptions.

    • Optional type: MediaSourceType

      The type of the video source. See MediaSourceType.

    Returns number

  • Sets the video encoder configuration.

    Sets the encoder configuration for the local video. Each configuration profile corresponds to a set of video parameters, including the resolution, frame rate, and bitrate.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets video application scenarios.

    After successfully calling this method, the SDK will automatically enable the best practice strategies and adjust key performance metrics based on the specified scenario, to optimize the video experience. Call this method before joining a channel.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -4: Video application scenarios are not supported. Possible reasons include that you use the Voice SDK instead of the Video SDK. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method.

    Parameters

    • scenarioType: VideoApplicationScenarioType

      The type of video application scenario. See VideoApplicationScenarioType. ApplicationScenarioMeeting (1) is suitable for meeting scenarios. The SDK automatically enables the following strategies: In meeting scenarios where low-quality video streams are required to have a high bitrate, the SDK automatically enables multiple technologies used to deal with network congestion, to enhance the performance of the low-quality streams and to ensure smooth reception by subscribers. The SDK monitors the number of subscribers to the high-quality video stream in real time and dynamically adjusts its configuration based on the number of subscribers. If nobody subscribes to the high-quality stream, the SDK automatically reduces its bitrate and frame rate to save upstream bandwidth. If someone subscribes to the high-quality stream, the SDK resets the high-quality stream to the VideoEncoderConfiguration configuration used in the most recent call to setVideoEncoderConfiguration. If no configuration has been set by the user previously, the following values are used: resolution 960 × 540, frame rate 15 fps, bitrate 1,000 Kbps. The SDK monitors the number of subscribers to the low-quality video stream in real time and dynamically enables or disables it based on the number of subscribers. If the user has called setDualStreamMode to never send the low-quality video stream (DisableSimulcastStream), the dynamic adjustment of the low-quality stream in meeting scenarios will not take effect. If nobody subscribes to the low-quality stream, the SDK automatically disables it to save upstream bandwidth. If someone subscribes to the low-quality stream, the SDK enables the low-quality stream and resets it to the SimulcastStreamConfig configuration used in the most recent call to setDualStreamMode. If no configuration has been set by the user previously, the following values are used: resolution 480 × 272, frame rate 15 fps, bitrate 500 Kbps. ApplicationScenario1v1 (2) is suitable for 1v1 video call scenarios. To meet the requirements for low latency and high-quality video in this scenario, the SDK optimizes its strategies, improving performance in terms of video quality, first frame rendering, latency on mid-to-low-end devices, and smoothness under weak network conditions.

    Returns number

  • Sets parameters for the preset voice beautifier effects.

    To achieve better vocal effects, it is recommended that you call the following APIs before calling this method: Call setAudioScenario to set the audio scenario to the high-quality audio scenario, namely AudioScenarioGameStreaming (3). Call setAudioProfile to set the profile parameter to AudioProfileMusicHighQuality (4) or AudioProfileMusicHighQualityStereo (5). Call this method to set a gender characteristic and a reverberation effect for the singing beautifier effect. This method sets parameters for the local user who sends an audio stream. After setting the audio parameters, all users in the channel can hear the effect. Do not set the profile parameter in setAudioProfile to AudioProfileSpeechStandard (1) or AudioProfileIot (6), or the method does not take effect. You can call this method either before or after joining a channel. This method has the best effect on human voice processing, and Agora does not recommend calling this method to process audio data containing music. After calling setVoiceBeautifierParameters, Agora does not recommend calling the following methods, otherwise the effect set by setVoiceBeautifierParameters will be overwritten: setAudioEffectPreset, setAudioEffectParameters, setVoiceBeautifierPreset, setLocalVoicePitch, setLocalVoiceEqualization, setLocalVoiceReverb, and setVoiceConversionPreset. This method relies on the voice beautifier dynamic library libagora_audio_beauty_extension.dll. If the dynamic library is deleted, the function cannot be enabled normally.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • preset: VoiceBeautifierPreset

      The option for the preset audio effect: SINGING_BEAUTIFIER : The singing beautifier effect.

    • param1: number

      The gender characteristics options for the singing voice: 1 : A male-sounding voice. 2 : A female-sounding voice.

    • param2: number

      The reverberation effect options for the singing voice: 1 : The reverberation effect sounds like singing in a small room. 2 : The reverberation effect sounds like singing in a large room. 3 : The reverberation effect sounds like singing in a hall.

    Returns number
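
    A sketch of the recommended call order for the singing beautifier (the TypeScript enum member names such as AudioScenarioGameStreaming and SingingBeautifier are assumptions based on the values named above):

    ```typescript
    import {
      IRtcEngine,
      AudioProfileType,
      AudioScenarioType,
      VoiceBeautifierPreset,
    } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    // Recommended audio setup before applying the singing beautifier.
    engine.setAudioScenario(AudioScenarioType.AudioScenarioGameStreaming);
    engine.setAudioProfile(AudioProfileType.AudioProfileMusicHighQualityStereo);

    // Male-sounding voice (param1 = 1) with a "large room" reverberation (param2 = 2).
    engine.setVoiceBeautifierParameters(VoiceBeautifierPreset.SingingBeautifier, 1, 2);
    ```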

  • Sets a preset voice beautifier effect.

    Call this method to set a preset voice beautifier effect for the local user who sends an audio stream. After setting a voice beautifier effect, all users in the channel can hear the effect. You can set different voice beautifier effects for different scenarios.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Sets a preset voice changing effect.

    Call this method to set a preset voice changing effect for the local user who publishes an audio stream in a channel. After setting the voice changing effect, all users in the channel can hear the effect. You can set different voice changing effects for the user depending on different scenarios.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • preset: VoiceConversionPreset

      The options for the preset voice changing effect. See VoiceConversionPreset.

    Returns number

  • Sets the volume of a specified audio effect file.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The ID of the audio effect. The ID of each audio effect file is unique.

    • volume: number

      The playback volume. The value range is [0, 100]. The default value is 100, which represents the original volume.

    Returns number

  • Starts playing the music file.

    For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support. If the local music file does not exist, the SDK does not support the file format, or the SDK cannot access the music file URL, the SDK reports AudioMixingReasonCanNotOpen.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -2: The parameter is invalid. -3: The SDK is not ready. The audio module is disabled. The program is not complete. The initialization of IRtcEngine fails. Reinitialize the IRtcEngine.

    Parameters

    • filePath: string

      File path: Android: The file path, which needs to be accurate to the file name and suffix. Agora supports URL addresses, absolute paths, or file paths that start with /assets/. You might encounter permission issues if you use an absolute path to access a local file, so Agora recommends using a URI address instead. For example: content://com.android.providers.media.documents/document/audio%3A14441 iOS: The absolute path or URL address (including the suffixes of the filename) of the audio effect file. For example: /var/mobile/Containers/Data/audio.mp4.

    • loopback: boolean

      Whether to only play music files on the local client: true : Only play music files on the local client so that only the local user can hear the music. false : Publish music files to remote clients so that both the local user and remote users can hear the music.

    • cycle: number

      The number of times the music file plays.

      ≥ 0: The number of playback times. For example, 1 represents playing the file once. -1: Play the audio file in an infinite loop.

    • Optional startPos: number

      The playback position (ms) of the music file.

    Returns number
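
    For example, playing a music file once, publishing it to remote users, and starting 30 seconds into the track (a sketch; the URL is a placeholder):

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    engine.startAudioMixing(
      'https://example.com/music.mp3', // filePath: URL or local path as described above
      false, // loopback: false publishes the music to remote users as well
      1,     // cycle: play once
      30000  // startPos (ms), optional
    );
    ```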

  • Starts audio recording on the client and sets recording configurations.

    The Agora SDK allows recording during a call. After successfully calling this method, you can record the audio of users in the channel and get an audio recording file. Supported formats of audio files are as follows: WAV: High-fidelity files with typically larger file sizes. For example, if the sample rate is 32,000 Hz, the file size for 10-minute recording is approximately 73 MB. AAC: Low-fidelity files with typically smaller file sizes. For example, if the sample rate is 32,000 Hz and the recording quality is AudioRecordingQualityMedium, the file size for 10-minute recording is approximately 2 MB. Once the user leaves the channel, the recording automatically stops.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Starts camera capture.

    You can call this method to start capturing video from one or more cameras by specifying sourceType. On the iOS platform, if you want to enable multi-camera capture, you need to call enableMultiCamera and set enabled to true before calling this method.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sourceType: VideoSourceType

      The type of the video source. See VideoSourceType. On iOS devices, you can capture video from up to 2 cameras, provided the device has multiple cameras or supports external cameras. On Android devices, you can capture video from up to 4 cameras, provided the device has multiple cameras or supports external cameras.

    • config: CameraCapturerConfiguration

      The configuration of the video capture. See CameraCapturerConfiguration. On the iOS platform, this parameter has no practical function. Use the config parameter in enableMultiCamera instead to set the video capture configuration.

    Returns number
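
    A hedged sketch of capturing from the primary camera (the CameraCapturerConfiguration fields and the CameraDirection / VideoSourceType member names shown are assumptions):

    ```typescript
    import { IRtcEngine, VideoSourceType, CameraDirection } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    engine.enableVideo();

    engine.startCameraCapture(VideoSourceType.VideoSourceCamera, {
      cameraDirection: CameraDirection.CameraFront,  // assumed field and enum member
      format: { width: 1280, height: 720, fps: 30 }, // assumed capture format fields
    });
    ```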

  • Starts pushing media streams to the CDN directly.

    Agora does not support pushing media streams to one URL repeatedly. Media options: Agora does not support setting the value of publishCameraTrack and publishCustomVideoTrack as true, or the value of publishMicrophoneTrack and publishCustomAudioTrack as true at the same time. When choosing media setting options (DirectCdnStreamingMediaOptions), you can refer to the following examples: If you want to push audio and video streams captured by the host from a custom source, the media setting options should be set as follows: set publishCustomAudioTrack to true and call the pushAudioFrame method; set publishCustomVideoTrack to true and call the pushVideoFrame method; keep publishCameraTrack as false (the default value); keep publishMicrophoneTrack as false (the default value). As of v4.2.0, the Agora SDK supports audio-only live streaming. You can set publishCustomAudioTrack or publishMicrophoneTrack in DirectCdnStreamingMediaOptions as true and call pushAudioFrame to push audio streams. Agora only supports pushing one audio-and-video stream or one audio-only stream to the CDN.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Starts an audio and video call loopback test.

    To test whether the user's local sending and receiving streams are normal, you can call this method to perform an audio and video call loop test, which tests whether the audio and video devices and the user's upstream and downstream networks are working properly. After starting the test, the user needs to make a sound or face the camera. The audio or video is output after about two seconds. If the audio playback is normal, the audio device and the user's upstream and downstream networks are working properly; if the video playback is normal, the video device and the user's upstream and downstream networks are working properly.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • config: EchoTestConfiguration

      The configuration of the audio and video call loop test. See EchoTestConfiguration.

    Returns number

  • Starts the last mile network probe test.

    This method starts the last-mile network probe test before joining a channel to get the uplink and downlink last mile network statistics, including the bandwidth, packet loss, jitter, and round-trip time (RTT).

    Returns

    0: Success. < 0: Failure.

    Parameters

    • config: LastmileProbeConfig

      The configurations of the last-mile network probe test. See LastmileProbeConfig.

    Returns number

  • Starts the local video mixing.

    After calling this method, you can merge multiple video streams into one video stream locally. For example, you can merge the video streams captured by the camera, screen sharing, media player, remote video, video files, images, etc. into one video stream, and then publish the mixed video stream to the channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • config: LocalTranscoderConfiguration

      Configuration of the local video mixing, see LocalTranscoderConfiguration. The maximum resolution of each video stream participating in the local video mixing is 4096 × 2160. If this limit is exceeded, video mixing does not take effect. The maximum resolution of the mixed video stream is 4096 × 2160.

    Returns number

  • Enables tracing the video frame rendering process.

    The SDK starts tracing the rendering status of the video frames in the channel from the moment this method is successfully called and reports information about the event through the onVideoRenderingTracingResult callback. By default, the SDK starts tracing the video rendering event automatically when the local user successfully joins the channel. You can call this method at an appropriate time according to the actual application scenario to customize the tracing process. After the local user leaves the current channel, the SDK automatically resets the time point to the next time when the user successfully joins the channel.

    Returns

    0: Success. < 0: Failure. -7: The method is called before IRtcEngine is initialized.

    Returns number

  • Starts relaying media streams across channels or updates channels for media relay.

    The first successful call to this method starts relaying media streams from the source channel to the destination channels. To relay the media stream to other channels, or exit one of the current media relays, you can call this method again to update the destination channels. This feature supports relaying media streams to a maximum of six destination channels. After a successful method call, the SDK triggers the onChannelMediaRelayStateChanged callback, and this callback returns the state of the media stream relay. Common states are as follows: If the onChannelMediaRelayStateChanged callback returns RelayStateRunning (2) and RelayOk (0), it means that the SDK starts relaying media streams from the source channel to the destination channel. If the onChannelMediaRelayStateChanged callback returns RelayStateFailure (3), an exception occurs during the media stream relay. Call this method after joining the channel. This method takes effect only when you are a host in a live streaming channel. The cross-channel media stream relay function needs to be enabled by contacting technical support. Agora does not support string user accounts in this API.

    Returns

    0: Success. < 0: Failure. -1: A general error occurs (no specified reason). -2: The parameter is invalid. -8: Internal state error. Probably because the user is not a broadcaster.

    Parameters

    Returns number

  • Enables the local video preview and specifies the video source for the preview.

    This method is used to start local video preview and specify the video source that appears in the preview screen.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • Optional sourceType: VideoSourceType

      The type of the video source. See VideoSourceType.

    Returns number

  • Enables the local video preview.

    You can call this method to enable local video preview.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Enables the virtual metronome.

    After enabling the virtual metronome, the SDK plays the specified audio effect file from the beginning, and controls the playback duration of each file according to beatsPerMinute you set in AgoraRhythmPlayerConfig. For example, if you set beatsPerMinute as 60, the SDK plays one beat every second. If the file duration exceeds the beat duration, the SDK only plays the audio within the beat duration. By default, the sound of the virtual metronome is published in the channel. If you want the sound to be heard by the remote users, you can set publishRhythmPlayerTrack in ChannelMediaOptions as true.

    Returns

    0: Success. < 0: Failure. -22: Cannot find audio effect files. Please set the correct paths for sound1 and sound2.

    Parameters

    • sound1: string

      The absolute path or URL address (including the filename extensions) of the file for the downbeat. For example, C:\music\audio.mp4. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support.

    • sound2: string

      The absolute path or URL address (including the filename extensions) of the file for the upbeats. For example, C:\music\audio.mp4. For the audio file formats supported by this method, see What formats of audio files does the Agora RTC SDK support.

    • config: AgoraRhythmPlayerConfig

      The metronome configuration. See AgoraRhythmPlayerConfig.

    Returns number
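
    For instance, a 4/4 metronome at 60 beats per minute, so one beat plays every second (a sketch; the file paths are placeholders and the beatsPerMeasure field is an assumption):

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    declare const engine: IRtcEngine; // created via createAgoraRtcEngine and initialize

    engine.startRhythmPlayer(
      '/path/to/downbeat.wav', // sound1: played on the downbeat
      '/path/to/upbeat.wav',   // sound2: played on the upbeats
      { beatsPerMeasure: 4, beatsPerMinute: 60 } // beatsPerMeasure is an assumed field
    );
    ```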

  • Starts Media Push and sets the transcoding configuration.

    Agora recommends that you use the server-side Media Push function. You can call this method to push a live audio-and-video stream to the specified CDN address and set the transcoding configuration. This method can push media streams to only one CDN address at a time, so if you need to push streams to multiple addresses, call this method multiple times. Under one Agora project, the maximum number of concurrent tasks to push media streams is 200 by default. If you need a higher quota, contact technical support. After you call this method, the SDK triggers the onRtmpStreamingStateChanged callback on the local client to report the state of the streaming. Call this method after joining a channel. Only hosts in the LIVE_BROADCASTING profile can call this method. If you want to retry pushing streams after a failed push, make sure to call stopRtmpStream first, then call this method to retry pushing streams; otherwise, the SDK returns the same error code as the last failed push.

    Returns

    0: Success. < 0: Failure. -2: The URL or configuration of transcoding is invalid; check your URL and transcoding configurations. -7: The SDK is not initialized before calling this method. -19: The Media Push URL is already in use; use another URL instead.

    Parameters

    • url: string

      The address of Media Push. The format is RTMP or RTMPS. The character length cannot exceed 1024 bytes. Special characters such as Chinese characters are not supported.

    • transcoding: LiveTranscoding

      The transcoding configuration for Media Push. See LiveTranscoding.

    Returns number
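
    A minimal sketch of starting Media Push with transcoding, assuming this entry corresponds to startRtmpStreamWithTranscoding and that LiveTranscoding and its transcodingUsers entries expose the width, height, bitrate, frame rate, and layout fields used below; the URL and uid are placeholders.

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine,
    // joined as a host. URL and uid are placeholders.
    declare const engine: IRtcEngine;

    const url = 'rtmp://push.example.com/live/stream-key';
    const ret = engine.startRtmpStreamWithTranscoding(url, {
      width: 640,
      height: 360,
      videoBitrate: 800,
      videoFramerate: 15,
      transcodingUsers: [
        // Layout of the single host in the pushed stream.
        { uid: 12345, x: 0, y: 0, width: 640, height: 360, zOrder: 1, alpha: 1.0 },
      ],
    });
    if (ret !== 0) {
      console.warn(`startRtmpStreamWithTranscoding failed: ${ret}`);
    }
    ```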

  • Starts pushing media streams to a CDN without transcoding.

    Call this method after joining a channel. Only hosts in the LIVE_BROADCASTING profile can call this method. If you want to retry pushing streams after a failed push, make sure to call stopRtmpStream first, then call this method to retry pushing streams; otherwise, the SDK returns the same error code as the last failed push. Agora recommends that you use the server-side Media Push function. You can call this method to push an audio or video stream to the specified CDN address. This method can push media streams to only one CDN address at a time, so if you need to push streams to multiple addresses, call this method multiple times. After you call this method, the SDK triggers the onRtmpStreamingStateChanged callback on the local client to report the state of the streaming.

    Returns

    0: Success. < 0: Failure. -2: The URL or configuration of transcoding is invalid; check your URL and transcoding configurations. -7: The SDK is not initialized before calling this method. -19: The Media Push URL is already in use; use another URL instead.

    Parameters

    • url: string

      The address of Media Push. The format is RTMP or RTMPS. The character length cannot exceed 1024 bytes. Special characters such as Chinese characters are not supported.

    Returns number
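
    Because a failed push must be stopped before retrying (as noted above), a retry flow might look like the hedged sketch below; it assumes this entry corresponds to startRtmpStreamWithoutTranscoding, that stopRtmpStream takes the same URL, and that the failure state is exposed through the RtmpStreamPublishState enum.

    ```typescript
    import { IRtcEngine, RtmpStreamPublishState } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine,
    // joined as a host. The URL is a placeholder.
    declare const engine: IRtcEngine;

    const url = 'rtmp://push.example.com/live/stream-key';

    engine.registerEventHandler({
      onRtmpStreamingStateChanged: (streamUrl, state, errCode) => {
        // The failure state name is assumed from the RtmpStreamPublishState enum.
        if (
          streamUrl === url &&
          state === RtmpStreamPublishState.RtmpStreamPublishStateFailure
        ) {
          // Stop the failed push before retrying; otherwise the SDK keeps
          // returning the error code of the last failed push.
          engine.stopRtmpStream(url);
          engine.startRtmpStreamWithoutTranscoding(url);
        }
      },
    });

    engine.startRtmpStreamWithoutTranscoding(url);
    ```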

  • Starts screen capture.

    The billing for the screen sharing stream is based on the dimensions in ScreenVideoParameters: if you do not pass in a value, Agora bills you at 1280 × 720; if you pass in a value, Agora bills you at that value.

    Returns

    0: Success. < 0: Failure. -2 (iOS platform): Empty parameter. -2 (Android platform): The system version is too low. Ensure that the Android API level is not lower than 21. -3 (Android platform): Unable to capture system audio. Ensure that the Android API level is not lower than 29.

    Parameters

    • captureParams: ScreenCaptureParameters2

      The screen sharing encoding parameters. The default video resolution is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges. See ScreenCaptureParameters2.

    Returns number
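
    As a hedged illustration of the billing note above (assuming this entry corresponds to startScreenCapture and that ScreenCaptureParameters2 exposes the captureVideo, captureAudio, and videoParams fields), passing explicit dimensions keeps the billed resolution predictable:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine.
    declare const engine: IRtcEngine;

    // Pass explicit dimensions so the billed resolution is the one you chose.
    const ret = engine.startScreenCapture({
      captureVideo: true,
      captureAudio: true,
      videoParams: {
        dimensions: { width: 1280, height: 720 },
        frameRate: 15,
      },
    });
    if (ret !== 0) {
      console.warn(`startScreenCapture failed: ${ret}`);
    }
    ```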

  • Stops playing all audio effects.

    When you no longer need to play the audio effect, you can call this method to stop the playback. If you only need to pause the playback, call pauseAllEffects.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops playing the music file.

    After calling startAudioMixing to play a music file, you can call this method to stop the playing. If you only need to pause the playback, call pauseAudioMixing.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops the audio recording on the client.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops camera capture.

    After calling startCameraCapture to start capturing video through one or more cameras, you can call this method and set the sourceType parameter to stop the capture from the specified cameras. On the iOS platform, if you want to disable multi-camera capture, you need to call enableMultiCamera after calling this method and set enabled to false. If you are using the local video mixing function, calling this method can cause the local video mixing to be interrupted.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • sourceType: VideoSourceType

      The type of the video source. See VideoSourceType.

    Returns number
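
    A hedged sketch of the sequence described above, assuming this entry corresponds to stopCameraCapture, that a secondary camera was started earlier with startCameraCapture, and that enableMultiCamera accepts an enabled flag plus a capturer configuration object:

    ```typescript
    import { Platform } from 'react-native';
    import { IRtcEngine, VideoSourceType } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine that
    // earlier started a second camera with startCameraCapture.
    declare const engine: IRtcEngine;

    // Stop capturing from the secondary camera.
    engine.stopCameraCapture(VideoSourceType.VideoSourceCameraSecondary);

    // On iOS, disable multi-camera capture afterwards. The empty capturer
    // configuration object is an assumption for this sketch.
    if (Platform.OS === 'ios') {
      engine.enableMultiCamera(false, {});
    }
    ```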

  • Stops the media stream relay. Once the relay stops, the host quits all the target channels.

    After a successful method call, the SDK triggers the onChannelMediaRelayStateChanged callback. If the callback reports RelayStateIdle (0) and RelayOk (0), the host successfully stops the relay. If the method call fails, the SDK triggers the onChannelMediaRelayStateChanged callback with the RelayErrorServerNoResponse (2) or RelayErrorServerConnectionLost (8) status code. You can call the leaveChannel method to leave the channel, and the media stream relay automatically stops.

    Returns

    0: Success. < 0: Failure. -5: The method call was rejected. There is no ongoing channel media relay.

    Returns number

  • Stops pushing media streams to the CDN directly.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops the audio call test.

    After calling startEchoTest, you must call this method to end the test; otherwise, you cannot start another audio and video call loop test or join a channel.

    Returns

    0: Success. < 0: Failure. -5(ERR_REFUSED): Failed to stop the echo test. The echo test may not be running.

    Returns number

  • Stops playing a specified audio effect.

    When you no longer need to play the audio effect, you can call this method to stop the playback. If you only need to pause the playback, call pauseEffect.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The ID of the audio effect. Each audio effect has a unique ID.

    Returns number

  • Stops the last mile network probe test.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops the local video mixing.

    After calling startLocalVideoTranscoder, call this method if you want to stop the local video mixing.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops the local video preview.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • Optional sourceType: VideoSourceType

      The type of the video source. See VideoSourceType.

    Returns number

  • Disables the virtual metronome.

    After calling startRhythmPlayer, you can call this method to disable the virtual metronome.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Stops pushing media streams to a CDN.

    Agora recommends that you use the server-side Media Push function. You can call this method to stop the live stream on the specified CDN address. This method can stop pushing media streams to only one CDN address at a time, so if you need to stop pushing streams to multiple addresses, call this method multiple times. After you call this method, the SDK triggers the onRtmpStreamingStateChanged callback on the local client to report the state of the streaming.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • url: string

      The address of Media Push. The format is RTMP or RTMPS. The character length cannot exceed 1024 bytes. Special characters such as Chinese characters are not supported.

    Returns number

  • Stops screen capture.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Switches between front and rear cameras.

    You can call this method to dynamically switch cameras based on the actual camera availability during the app's runtime, without having to restart the video stream or reconfigure the video source.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Takes a snapshot of a video stream.

    This method takes a snapshot of a video stream from the specified user, generates a JPG image, and saves it to the specified path.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • uid: number

      The user ID. Set uid as 0 if you want to take a snapshot of the local user's video.

    • filePath: string

      The local path (including filename extensions) of the snapshot. For example: iOS: /App Sandbox/Library/Caches/example.jpg Android: /storage/emulated/0/Android/data//files/example.jpg Ensure that the path you specify exists and is writable.

    Returns number
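
    For instance (assuming this entry corresponds to takeSnapshot and that the engine has already joined a channel), a snapshot of the local video could be requested as follows; the file path is a placeholder and must point to an existing, writable location:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine in a
    // joined channel. The file path is a placeholder.
    declare const engine: IRtcEngine;

    const filePath = '/path/to/app-files/example.jpg';
    const ret = engine.takeSnapshot(0, filePath); // uid 0 = local user
    if (ret !== 0) {
      console.warn(`takeSnapshot failed: ${ret}`);
    }
    ```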

  • Releases all preloaded audio effects from the memory.

    Returns

    0: Success. < 0: Failure.

    Returns number

  • Releases a specified preloaded audio effect from the memory.

    After loading the audio effect file into memory using preloadEffect, if you need to release the audio effect file, call this method.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • soundId: number

      The ID of the audio effect. Each audio effect has a unique ID.

    Returns number
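
    As a hedged example (assuming this entry corresponds to unloadEffect, used alongside preloadEffect and stopEffect), a typical audio effect lifecycle might look like this; the sound ID and file path are placeholders:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine.
    declare const engine: IRtcEngine;

    const soundId = 1;                        // arbitrary, app-defined effect ID
    const effectPath = '/path/to/effect.wav'; // placeholder path

    // Load the effect into memory ahead of time.
    engine.preloadEffect(soundId, effectPath);

    // ...play it elsewhere in your app, then stop playback and release
    // the preloaded file once it is no longer needed.
    engine.stopEffect(soundId);
    engine.unloadEffect(soundId);
    ```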

  • Unregisters the encoded audio frame observer.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Unregisters the audio spectrum observer.

    After calling registerAudioSpectrumObserver, if you want to disable audio spectrum monitoring, you can call this method. You can call this method either before or after joining a channel.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Removes the specified callback events.

    You can call this method to remove all added callback events.

    Returns

    true : Success. false : Failure.

    Parameters

    Returns boolean

  • Unregisters the specified metadata observer.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • observer: IMetadataObserver

      The metadata observer. See IMetadataObserver.

    • type: MetadataType

      The metadata type. The SDK currently only supports VideoMetadata. See MetadataType.

    Returns number

  • Updates the channel media options after joining the channel.

    Returns

    0: Success. < 0: Failure. -2: The value of a member in ChannelMediaOptions is invalid. For example, the token or the user ID is invalid. You need to fill in a valid parameter. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method. -8: The internal state of the IRtcEngine object is wrong. The possible reason is that the user is not in the channel. Agora recommends that you use the onConnectionStateChanged callback to see whether the user is in the channel. If you receive the ConnectionStateDisconnected (1) or ConnectionStateFailed (5) state, the user is not in the channel. You need to call joinChannel to join a channel before calling this method.

    Parameters

    Returns number
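
    As a hedged example (assuming this entry corresponds to updateChannelMediaOptions and that the ChannelMediaOptions fields used below exist), a joined client could switch from publishing the camera to publishing a screen-share track like this:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine in a
    // joined channel.
    declare const engine: IRtcEngine;

    // Stop publishing the camera track and publish the screen-capture
    // video track instead.
    const ret = engine.updateChannelMediaOptions({
      publishCameraTrack: false,
      publishScreenCaptureVideo: true,
    });
    if (ret === -8) {
      console.warn('Not in a channel; call joinChannel before updating options');
    }
    ```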

  • Updates the local video mixing configuration.

    After calling startLocalVideoTranscoder, call this method if you want to update the local video mixing configuration. If you want to update the video source type used for local video mixing, such as adding a second camera or screen to capture video, you need to call this method after startCameraCapture or startScreenCapture.

    Returns

    0: Success. < 0: Failure.

    Parameters

    Returns number

  • Updates the wildcard token for preloading channels.

    You need to maintain the lifecycle of the wildcard token yourself. When the token expires, generate a new wildcard token and call this method to pass in the new token.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. For example, the token is invalid. You need to pass in a valid parameter and join the channel again. -7: The IRtcEngine object has not been initialized. You need to initialize the IRtcEngine object before calling this method.

    Parameters

    • token: string

      The new token.

    Returns number
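
    For example (assuming this entry corresponds to updatePreloadChannelToken; fetchWildcardToken is a hypothetical helper standing in for your own token server), the wildcard token could be refreshed as follows:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine.
    declare const engine: IRtcEngine;

    // Hypothetical helper that asks your own token server for a fresh
    // wildcard token.
    declare function fetchWildcardToken(): Promise<string>;

    async function refreshPreloadToken(): Promise<void> {
      const newToken = await fetchWildcardToken();
      const ret = engine.updatePreloadChannelToken(newToken);
      if (ret === -2) {
        console.warn('The new wildcard token was rejected as invalid');
      }
    }
    ```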

  • Updates the transcoding configuration.

    Agora recommends that you use the server-side Media Push function. After you start pushing media streams to CDN with transcoding, you can dynamically update the transcoding configuration according to the scenario. The SDK triggers the onTranscodingUpdated callback after the transcoding configuration is updated.

    Returns

    0: Success. < 0: Failure.

    Parameters

    • transcoding: LiveTranscoding

      The transcoding configuration for Media Push. See LiveTranscoding.

    Returns number
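
    A hedged sketch of a dynamic layout update, assuming this entry corresponds to updateRtmpTranscoding, that the event handler exposes the onTranscodingUpdated callback described above, and reusing the LiveTranscoding fields assumed earlier; the uid is a placeholder.

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine that is
    // currently pushing a transcoded stream.
    declare const engine: IRtcEngine;

    // Be notified once the new transcoding configuration takes effect.
    engine.registerEventHandler({
      onTranscodingUpdated: () => console.log('Transcoding configuration updated'),
    });

    // Switch the pushed layout to 720p while the stream is live.
    engine.updateRtmpTranscoding({
      width: 1280,
      height: 720,
      videoBitrate: 1200,
      videoFramerate: 15,
      transcodingUsers: [
        { uid: 12345, x: 0, y: 0, width: 1280, height: 720, zOrder: 1 },
      ],
    });
    ```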

  • Updates the screen capturing parameters.

    If the system audio is not captured when screen sharing is enabled, and you then want to update the configuration and publish the system audio, follow these steps: 1. Call this method and set captureAudio to true. 2. Call updateChannelMediaOptions and set publishScreenCaptureAudio to true to publish the audio captured by the screen. This method is for Android and iOS only. On the iOS platform, screen sharing is only available on iOS 12.0 and later. See the sketch after this entry for an example.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. -8: The screen sharing state is invalid. Probably because you have shared other screens or windows. Try calling stopScreenCapture to stop the current sharing and start sharing the screen again.

    Parameters

    • captureParams: ScreenCaptureParameters2

      The screen sharing encoding parameters. The default video resolution is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges. See ScreenCaptureParameters2.

    Returns number
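
    The two steps above might be wired up as in the following hedged sketch, assuming this entry corresponds to updateScreenCapture and using the captureAudio and publishScreenCaptureAudio fields named in this entry:

    ```typescript
    import { IRtcEngine } from 'react-native-agora';

    // `engine` is assumed to be an already-initialized IRtcEngine with an
    // ongoing screen-sharing session.
    declare const engine: IRtcEngine;

    // Step 1: start capturing the system audio in the ongoing screen share.
    engine.updateScreenCapture({ captureAudio: true });

    // Step 2: publish the captured system audio to the channel.
    engine.updateChannelMediaOptions({ publishScreenCaptureAudio: true });
    ```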

  • Updates the screen capturing parameters.

    Call this method after starting screen sharing or window sharing.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. -8: The screen sharing state is invalid. Probably because you have shared other screens or windows. Try calling stopScreenCapture to stop the current sharing and start sharing the screen again.

    Parameters

    • captureParams: ScreenCaptureParameters

      The screen sharing encoding parameters. The default video resolution is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges. The video properties of the screen sharing stream only need to be set through this parameter, and are unrelated to setVideoEncoderConfiguration.

    Returns number

  • Updates the screen capturing region.

    Call this method after starting screen sharing or window sharing.

    Returns

    0: Success. < 0: Failure. -2: The parameter is invalid. -8: The screen sharing state is invalid. Probably because you have shared other screens or windows. Try calling stopScreenCapture to stop the current sharing and start sharing the screen again.

    Parameters

    Returns number

Generated using TypeDoc