VST 3 SDK  VST 3.6.14
SDK for developing VST Plug-in
Frequently Asked Questions

Provides answers to some common questions.

Kindly note that we do not provide individual SDK support.
If you have any questions the FAQ below cannot answer, please refer to the VST 3 SDK Forum.

Check VST 3 Licensing Issues for questions about licensing.


Q: How should I communicate between the 'Processing' and the 'User Interface'?

With the term 'Processing' we mean the code implementing the Steinberg::Vst::IAudioProcessor interface, and with 'User Interface' the editor component implementing the Steinberg::Vst::IEditController interface.

If you need to communicate parameter changes to the user interface, such as metering changes and peaks, you need to define the parameter as an exported type. The parameter is then associated with an ID. In the process function you can inform the host of changes by using outputParameterChanges (from ProcessData): you add the parameter change (by ID) to a list that the host uses to send the changes back to the user interface at the correct time.

If you need to exchange more data than just parameter changes, such as tempo, sample rate, FFT, pitch analysis, or any other data resulting from your processing, you can use the IMessage interface (see the AGain example). However, you need to be careful to send the data from a 'timer' thread and not directly from the process function.
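As a hedged sketch of the parameter-change path (the types and the parameter ID below are invented stand-ins for illustration, not the SDK classes), the processor side can publish a block's peak level as an output parameter change that the host relays to the edit controller:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <map>
#include <vector>

using ParamID = unsigned int;
using ParamValue = double; // normalized [0, 1]

// Mock of the role played by Vst::IParameterChanges inside ProcessData.
struct OutputParameterChanges
{
    std::map<ParamID, std::vector<ParamValue>> queues;
    void addPoint (ParamID id, ParamValue v) { queues[id].push_back (v); }
};

static const ParamID kVuPeakId = 100; // hypothetical exported read-only parameter

// Called from the process function: compute the block peak and publish it.
// The host sends the change back to the user interface at the correct time.
void publishPeak (const float* buffer, int numSamples, OutputParameterChanges& out)
{
    float peak = 0.f;
    for (int i = 0; i < numSamples; ++i)
        peak = std::max (peak, std::fabs (buffer[i]));
    out.addPoint (kVuPeakId, peak);
}
```

The key point is that nothing here touches the user interface directly: the processor only appends to the output change list, and the host does the thread-safe hand-off.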


Q: I want to implement an audio meter in my user interface. How do I do this?

See Q: How should I communicate between the 'Processing' and the 'User Interface'?

Q: How does the host send automation data to my VST 3 Plug-in?

Automation data is sent to the audio processing method as part of the data passed as a parameter to the IAudioProcessor::process (processData) method.

In IAudioProcessor::process (processData):

IParameterChanges* paramChanges = processData.inputParameterChanges;

Automation data is transmitted as a list of parameter changes. This list always contains enough information to reproduce the original automation curve from the host in a sample-accurate way. Check the AGain example to see how it can be implemented.
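The read side of this list can be sketched like so (minimal stand-ins mirroring the shape of IParameterChanges/IParamValueQueue, not the real SDK classes); a sample-accurate Plug-in would ramp between points instead of only applying the last one per block:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical stand-ins for the queue structures delivered in ProcessData.
struct ParamPoint { int sampleOffset; double value; };
struct ParamValueQueue { unsigned int paramId; std::vector<ParamPoint> points; };

// Walk every queue of the input parameter changes and take over the values.
void applyParameterChanges (const std::vector<ParamValueQueue>& inputChanges,
                            std::map<unsigned int, double>& currentValues)
{
    for (const auto& queue : inputChanges)
        if (!queue.points.empty ())
            currentValues[queue.paramId] = queue.points.back ().value;
}
```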

See also
Parameters and Automation

Q: How do I report to the host that the Plug-in has new parameter titles?

Due to preset loading or user interaction, the Plug-in may change its parameter names (titles), but not their number or their IDs. To inform the host about this change, the Plug-in should call its component handler from the editController with the flag kParamTitlesChanged:

componentHandler->restartComponent (kParamTitlesChanged);

The host will rescan the parameter list and update the titles. Note that with the flag kParamValuesChanged only the parameter values will be updated.

Q: How do I receive MIDI controllers from the host?

MIDI controllers are not transmitted directly to a VST component. MIDI as a hardware protocol has restrictions that can be avoided in software. Controller data in particular comes with unclear and often ignored semantics. On top of this, it can interfere with regular parameter automation, and the host is unaware of what happens in the Plug-in when MIDI controllers are passed directly.

So any functionality that is to be controlled by MIDI controllers must be exported as a regular parameter. The host transforms incoming MIDI controller data using this interface and transmits it as normal parameter changes. This allows the host to automate them in the same way as other parameters.

To inform the host about this mapping of MIDI CCs to Plug-in parameters, the Plug-in should implement the IMidiMapping interface.
If the mapping changes, the Plug-in should call IComponentHandler::restartComponent (kMidiCCAssignmentChanged) to inform the host about this change.
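The idea behind IMidiMapping can be sketched as follows. This is a hypothetical, simplified signature for illustration (the parameter ID and the bool return, standing in for kResultTrue/kResultFalse, are invented, not the SDK's types):

```cpp
#include <cassert>

// Hypothetical IDs for illustration only.
static const unsigned int kModWheelParamId = 200;
static const int kCtrlModWheel = 1; // MIDI CC#1

// Given bus, channel, and controller number, report which exported
// parameter the controller drives. The host then converts incoming CC
// data into normal parameter changes for that parameter.
bool getMidiControllerAssignment (int busIndex, int /*channel*/,
                                  int ccNumber, unsigned int& paramId)
{
    if (busIndex == 0 && ccNumber == kCtrlModWheel)
    {
        paramId = kModWheelParamId;
        return true;
    }
    return false; // no mapping for this controller
}
```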

Q: How are my parameter changes (from UI interaction) sent to the processor if the host does not process?

When a parameter is changed in the Plug-in UI by user action, the Plug-in sends this change to the host with performEdit (do not forget to call beginEdit and endEdit). The host then has the responsibility to transfer this parameter change to the processor part:

- If the audio engine is running (playing), this is done in the next available process call.
- If the audio engine is not running, the host has to flush parameter changes from time to time by sending them to the processor with a process call (with the audio buffers set to null). In this case the Plug-in should only update the parameter changes without processing any audio.

It is very important that the host supports this flush mechanism; otherwise, when saving the Plug-in state (project/preset), the host will not get the correctly updated one.


Q: How does Audio Processing Bypass work?

In order to implement audio process bypassing, the Plug-in can export a parameter which is additionally and exclusively flagged with the attribute kIsBypass. When the user activates the Plug-in bypass in the host, this is sent, like all parameter changes, as part of the parameter data passed to the Steinberg::Vst::IAudioProcessor::process method.

The implementation of the bypass feature is entirely the responsibility of the Plug-in:

The IAudioProcessor::process method will continue to be called. The Plug-in must take care of artifact-free switching (ramping, parallel processing, or algorithm changes) and must also provide a delayed action if the Plug-in has a latency. Note: the Plug-in needs to save the bypass parameter in its state like any other parameter.
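One possible approach to the artifact-free switching mentioned above (a sketch, not the only valid implementation; the function name is invented) is to crossfade between the processed ("wet") and the unprocessed ("dry") signal over one block when the bypass parameter toggles:

```cpp
#include <cassert>

// Crossfade wet -> dry (bypass on) or dry -> wet (bypass off) over one
// block. Assumes numSamples > 1.
void crossfadeBypass (const float* dry, const float* wet, float* out,
                      int numSamples, bool toBypass)
{
    for (int i = 0; i < numSamples; ++i)
    {
        float t = (float)i / (float)(numSamples - 1); // ramp 0 -> 1
        float wetGain = toBypass ? 1.f - t : t;       // fade wet out or in
        out[i] = wetGain * wet[i] + (1.f - wetGain) * dry[i];
    }
}
```

A Plug-in with latency additionally has to delay the dry signal by the same amount before mixing, so that both paths stay time-aligned.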

Q: Must the host deliver valid initialized Audio buffers if the associated bus is deactivated?

In a correctly implemented host, if an input or output bus exists in the host and it has become disconnected from the Plug-in, the Plug-in will receive the corresponding deactivation information (bus activation).

Additionally, a Plug-in with a disconnected input bus will continue to receive default silence buffers, just as a Plug-in with a disconnected output bus will continue to receive default 'nirvana' buffers (whose content is discarded). When these deactivated buses are the last buses (for input or output), the host may not provide the associated AudioBusBuffers; in this case the Plug-in should check numInputs and numOutputs and must not process these buses.

Q: Can the maximum sample block size change while the Plug-in is processing?

The maximum sample block size (maxSamplesPerBlock) can change during the lifetime of a Plug-in, but NOT while the audio component is active. Therefore the maximum sample block size can never change during or between process calls while the Plug-in is active.

If the host changes the maximum sample block size, it specifically calls the following sequence: setActive (false), then IAudioProcessor::setupProcessing with the new ProcessSetup, then setActive (true).

Note that ProcessData::numSamples, which indicates how many samples are used in a given process call, can change from call to call, but it is never bigger than maxSamplesPerBlock.

Q: Can the sample rate change while the Plug-in is processing?

No. Same as Q: Can the maximum sample block size change while the Plug-in is processing?

Q: Could the host call the process function without Audio buffers?

Yes, the host may call IAudioProcessor::process without buffers (numInputs and numOutputs are zero) in order to flush parameters (from host to Plug-in).
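A minimal sketch of a process function honoring this flush contract (the struct below is a hypothetical stand-in for the few ProcessData fields that matter here; the counters are scaffolding for illustration):

```cpp
#include <cassert>

// Hypothetical stand-in for the relevant ProcessData fields.
struct ProcessDataLite { int numInputs; int numOutputs; int numSamples; };

int parameterUpdates = 0;
int audioBlocks = 0;

// Incoming parameter changes are always consumed; audio is only rendered
// when the host actually provides buses/buffers.
void process (const ProcessDataLite& data)
{
    ++parameterUpdates;                 // apply inputParameterChanges here
    if (data.numInputs == 0 && data.numOutputs == 0)
        return;                         // flush call: nothing to render
    ++audioBlocks;                      // normal audio processing here
}
```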

Q: What is a Side-chain?

In audio applications, some Plug-ins allow for a secondary signal to be made available to the Plug-in and act as a controller of one or more parameters in the processing. Such a signal is commonly called a Side-chain Signal or Side-chain Input.


If a recorded kick drum is considered well played, but the recording of the bass player's part shows that he regularly plays slightly ahead of the kick drum, a Plug-in with a 'Gating' function on the bass part could use the kick drum signal as a side-chain to 'trim' the bass part precisely to that of the kick.

Another application is to automatically lower the level of a musical background when another signal, such as a voice, reaches a certain level. In this case a Plug-in with a 'Ducking' function would be used - where the main musical signal is reduced while the voice signal is applied to the side-chain input.

A delay's mix parameter could be controlled by a side-chain input signal - to make the amount of delay signal proportional to the level of another.

The side-chain could be used as an additional modulation source instead of cyclic forms of modulation. From the Plug-in's perspective, side-chain inputs and/or outputs are additional inputs and outputs which can be enabled or disabled by the host.

The host (if it supports this) will provide the user a way to route signal paths to these side-chain inputs, or from side-chain outputs to other signal inputs.

Q: How can I implement a Side-chain path into my Plug-in?

In AudioEffect::initialize (FUnknown* context) you must add the required bus and speaker configurations of your Plug-in.
For example, if your Plug-in works on one input and one output bus, both stereo, the appropriate code snippet would look like this:

addAudioInput (USTRING ("Stereo In"), SpeakerArr::kStereo);
addAudioOutput (USTRING ("Stereo Out"), SpeakerArr::kStereo);

In addition, adding a stereo side chain bus would look like this:

addAudioInput (USTRING ("Aux In"), SpeakerArr::kStereo, kAux);

Q: My Plug-in is capable of processing all possible channel configurations.

What type of speaker arrangement should I select when adding buses?

Take the configuration your Plug-in is most likely to be used with. For a 5.1-surround setup that would be the following:

addAudioInput (USTRING ("Surround In"), SpeakerArr::k51);
addAudioOutput (USTRING ("Surround Out"), SpeakerArr::k51);

But when the host calls Steinberg::Vst::IAudioProcessor::setBusArrangements, it is informing your Plug-in of the current speaker arrangement of the track it was inserted in. You should return kResultOk if you accept this arrangement, or kResultFalse if you do not.

Note: if you reject a setBusArrangements call by returning kResultFalse, the host calls Steinberg::Vst::IAudioProcessor::getBusArrangement, where you have the chance to set the parameter 'arrangement' to the speaker arrangement your Plug-in accepts for the given bus. The host may later call Steinberg::Vst::IAudioProcessor::setBusArrangements again with the arrangements wanted by the Plug-in; the Plug-in should then return kResultOk.
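The accept/reject decision can be sketched like this for a Plug-in supporting only symmetric mono or stereo configurations (a simplified stand-alone model: the bit values and the bool return, standing in for kResultOk/kResultFalse, are illustrative, not the SDK's constants):

```cpp
#include <cassert>
#include <cstdint>

// A SpeakerArrangement is a bitset of speakers, as in the SDK.
using SpeakerArrangement = uint64_t;
static const SpeakerArrangement kMonoArr = 0x1;   // illustrative bit pattern
static const SpeakerArrangement kStereoArr = 0x3; // illustrative bit pattern

// Accept only in == out, mono or stereo; on rejection the host should
// query getBusArrangement for the arrangement the Plug-in wants.
bool setBusArrangements (SpeakerArrangement in, SpeakerArrangement out)
{
    if (in == out && (in == kMonoArr || in == kStereoArr))
        return true; // accept: kResultOk
    return false;    // reject: kResultFalse
}
```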

Q: How are speaker arrangement settings handled for FX Plug-ins?

After instantiation of the Plug-in, the host calls Steinberg::Vst::IAudioProcessor::setBusArrangements with a default configuration (depending on the current channel configuration). If the Plug-in accepts it (by returning kResultOk), the host continues with this configuration.
If not (kResultFalse), the host asks the Plug-in for its wanted configuration by calling Steinberg::Vst::IAudioProcessor::getBusArrangement (for input and output) and then calls Steinberg::Vst::IAudioProcessor::setBusArrangements again with the final wanted configuration.
Example of a Plug-in supporting only symmetric Input-Output Arrangements:

Host -> Plug-in : setBusArrangements (monoIN, stereoOUT)
Plug-in return : kResultFalse
Host -> Plug-in : getBusArrangement (IN) => return Stereo;
Host -> Plug-in : getBusArrangement (OUT) => return Stereo;
Host -> Plug-in : setBusArrangements (stereoIN, stereoOUT)
Plug-in return : kResultOk

Example of a Plug-in supporting only asymmetric Input-Output Arrangements (mono->stereo):

Host -> Plug-in : setBusArrangements (stereoIN, stereoOUT)
Plug-in return : kResultFalse
Host -> Plug-in : getBusArrangement (IN) => return Mono;
Host -> Plug-in : getBusArrangement (OUT) => return Stereo;
Host -> Plug-in : setBusArrangements (MonoIN, stereoOUT)
Plug-in return : kResultOk

Example of a Plug-in supporting only asymmetric Input-Output Arrangements (mono -> stereo, up to 5.1):

Host -> Plug-in : setBusArrangements (5.1IN, 5.1OUT)
Plug-in return : kResultFalse
Host -> Plug-in : getBusArrangement (IN) => return Mono;
Host -> Plug-in : getBusArrangement (OUT) => return 5.1;
Host -> Plug-in : setBusArrangements (MonoIN, 5.1OUT)
Plug-in return : kResultOk
Host -> Plug-in : setBusArrangements (QuadroIN, QuadroOUT)
Plug-in return : kResultFalse
Host -> Plug-in : getBusArrangement (IN) => return Mono;
Host -> Plug-in : getBusArrangement (OUT) => return Quadro;
Host -> Plug-in : setBusArrangements (MonoIN, QuadroOUT)
Plug-in return : kResultOk

Q: My Plug-in has mono input and stereo output. How does VST 3 handle this?

There are two ways to instantiate a Plug-in like this.

Way 1
In AudioEffect::initialize (FUnknown* context) you add one mono and one stereo bus.
addAudioInput (USTRING ("Mono In"), SpeakerArr::kMono);
addAudioOutput (USTRING ("Stereo Out"), SpeakerArr::kStereo);
With Cubase/Nuendo as the host, the Plug-in, after being inserted into a stereo track, gets the left channel of the stereo input signal as its mono input. From this signal you can create a stereo output signal.
Way 2
In AudioEffect::initialize (FUnknown* context) you add one stereo input and one stereo output bus.
addAudioInput (USTRING ("Stereo In"), SpeakerArr::kStereo);
addAudioOutput (USTRING ("Stereo Out"), SpeakerArr::kStereo);
For processing, the algorithm of your Plug-in takes the left channel only, or creates a new mono input signal by adding the samples of the left and right channels.

Q: How does it work with silence flags?

The silence flags are a bitmask in which each bit corresponds to one channel of a bus (for example L and R for a stereo bus). The host has the responsibility to clear the input buffers (set them to zero) when it sets the input silence flags; the host initializes the output silence flags to no silence (= 0). On the other side, if the Plug-in produces silent output, it has the responsibility to clear (set to zero) its output buffers and to set the output silence flags correctly.
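The output-side bookkeeping can be sketched as follows (a stand-alone helper for illustration; the function name is invented, and in a real Plug-in you would usually know the silence state from your algorithm rather than scan the buffers):

```cpp
#include <cassert>
#include <cstdint>

// Compute the silence flags for one bus: bit n of the mask is set when
// channel n contains only zeros. The Plug-in must also have cleared those
// buffers itself before reporting them as silent.
uint64_t computeSilenceFlags (float** channelBuffers, int numChannels, int numSamples)
{
    uint64_t flags = 0;
    for (int ch = 0; ch < numChannels; ++ch)
    {
        bool silent = true;
        for (int i = 0; i < numSamples && silent; ++i)
            silent = (channelBuffers[ch][i] == 0.f);
        if (silent)
            flags |= (uint64_t)1 << ch;
    }
    return flags;
}
```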

Q: How do I report to the host that the Plug-in latency has changed?

The Plug-in should call its component handler from the editController with the flag kLatencyChanged:

componentHandler->restartComponent (kLatencyChanged);

The host will call Steinberg::Vst::IAudioProcessor::getLatencySamples () in order to check the new latency and adapt its latency compensation if supported.

Q: How do I report to the host that the Plug-in arrangement has changed?

When loading a preset or after a user interaction, the Plug-in may want to change its I/O configuration. In this case the Plug-in should call its component handler from the editController with the flag kIoChanged:

componentHandler->restartComponent (kIoChanged);

The host will call Steinberg::Vst::IAudioProcessor::getBusArrangement (for input and output) in order to check the newly requested arrangement, and then call Steinberg::Vst::IAudioProcessor::setBusArrangements (in the suspended state: setActive (false)) to confirm the requested arrangement.

Q: Can IAudioProcessor::setProcessing be called without any IAudioProcessor::process call?

Yes. It depends on how the DAW organizes its processing; for example, the call sequence setActive (true), setProcessing (true), setProcessing (false), setActive (false), without any process call in between, is legal.

Q: How to make sure that a plug-in is always processed?

If your Plug-in always generates sound without needing any audio input, you can add the category "Generator" to its subCategories (for example, use kFxGenerator), or you can return kInfiniteTail in the function IAudioProcessor::getTailSamples.

Q: Can IComponent::getState ()/setState () be called during processing?

Yes. setState and getState are normally called from the UI thread when the Plug-in is used in a realtime context; in an offline context, setState/getState may be called in the same thread as the process call. Check the workflow diagram Audio Processor Call Sequence for more info about which interfaces are called in which state.

Q: How can a Plug-in be informed that it is currently processed in offline processing?

When a Plug-in is used in an offline processing context (which is the case with the Cubase 9.5/Nuendo 8 feature Direct Offline Processing), its component is initialized with IComponent::setIoMode (Vst::kOfflineProcessing) (see The Simple Mode).
The offline processing mode (passed in the process call) is used when:

- the user exports audio (downmix)
- direct offline processing feature

With IComponent::setIoMode (Vst::kOfflineProcessing) you are able to differentiate between export and DOP (Direct Offline Processing).

Q: What should I NOT call in the realtime process function?

Good practice is to avoid any library calls in this critical realtime process. If you have to use them, check that they are designed for realtime operation and do not contain any locking mechanism. Avoid filesystem access, network access, UI calls, and memory allocation/deallocation. Take care when using STL containers, which may allocate memory behind the scenes; prefer patterns like a non-blocking memory pool, and delegate memory/file/network tasks to a UI or timer thread.


Q: The host doesn't open my Plug-in UI, why?

If you are not using VSTGUI, please check that you provide the correct object derived from EditorView or CPlugInView and that you override the function isPlatformTypeSupported ().

Compatibility with VST 2.x or VST 1

Q: How can I update my VST 2 version of my Plug-in to a VST 3 version and be sure that Cubase will load it instead of my old one?

You have to provide a special UID for your kVstAudioEffectClass and kVstComponentControllerClass components, based on the VST 2 UniqueID (4 characters) and the Plug-in name, like this:

static void convertVST2UID_To_FUID (FUID& newOne, int32 myVST2UID_4Chars, const char* pluginName, bool forControllerUID = false)
{
    char uidString[33];

    int32 vstfxid;
    if (forControllerUID)
        vstfxid = (('V' << 16) | ('S' << 8) | 'E');
    else
        vstfxid = (('V' << 16) | ('S' << 8) | 'T');

    char vstfxidStr[7] = {0};
    sprintf (vstfxidStr, "%06X", vstfxid);

    char uidStr[9] = {0};
    sprintf (uidStr, "%08X", myVST2UID_4Chars);

    strcpy (uidString, vstfxidStr);
    strcat (uidString, uidStr);

    char nameidStr[3] = {0};
    size_t len = strlen (pluginName);

    // !!! the pluginName has to be lower case !!!
    for (uint16 i = 0; i <= 8; i++)
    {
        uint8 c = i < len ? pluginName[i] : 0;
        sprintf (nameidStr, "%02X", c);
        strcat (uidString, nameidStr);
    }
    newOne.fromString (uidString);
}

Note that if you are developing a new Plug-in and you are using the VST 2 wrapper included in the SDK, you do not need to use convertVST2UID_To_FUID: a VST 2 specific vendor call allows the host (Steinberg hosts since Cubase 4.0) to get the VST 3 UID from a VST 2 version.

// extracted code from vst2wrapper.cpp
VstIntPtr Vst2Wrapper::vendorSpecific (VstInt32 lArg, VstIntPtr lArg2, void* ptrArg, float floatArg)
{
    switch (lArg)
    {
        case 'stCA':
        case 'stCa':
            switch (lArg2)
            {
                case 'FUID':
                    if (ptrArg)
                    {
                        if (vst3EffectClassID.isValid ())
                        {
                            memcpy ((char*)ptrArg, vst3EffectClassID, 16);
                            return 1;
                        }
                    }
                    break;
            }
            break;
    }
    return 0;
}

Q: How can I support projects which were saved with the VST 2 version of my Plug-in?

The host will call IComponent::setState() and IEditController::setComponentState() with the complete FXB/FXP stream. You have to extract your old state from that.

Here is the code to add to the VST 3 version when a VST 3 Plug-in replaces a VST 2 Plug-in in a Steinberg sequencer project:

static const int32 kPrivateChunkID = 'VstW';
static const int32 kPrivateChunkVersion = 1;

tresult PLUGIN_API MyVST3Effect::setState (IBStream* state)
{
    IBStreamer stream (state);
    stream.setByteOrder (kBigEndian);

    // try to read if it comes from an old VST 2 based project/preset
    int32 firstID = 0;
    if (stream.readInt32 (firstID) && firstID == kPrivateChunkID)
    {
        FStreamSizeHolder sizeHolder (stream);
        sizeHolder.beginRead ();

        int32 version = 0;
        stream.readInt32 (version); // should be kPrivateChunkVersion (1)

        int32 bypass = 0;
        stream.readInt32 (bypass); // the saved bypass (was saved separately with VST 2)
        if (bypass != 0)
            mustSwitchToBypass = true; // delay the bypass update if wanted

        sizeHolder.endRead ();
    }
    else // this was not a VST 2 based but a real VST 3 project/preset
    {
        stream.seek (-4, kSeekCurrent);
    }

    // from here read the bank....
    int32 result = readBank (&stream);
    return kResultTrue;
}

For automation compatibility, you have to ensure that the VST 3 parameter IDs have the same values as the indices of their associated parameters in VST 2. Only under this condition can the host play back the automation. The parameter values have the same meaning in VST 2 and VST 3.

Q: In VST 2 the editor was able to access the processing part, named effect, directly. How can I do this in VST 3?

You cannot, and more importantly must not, do this. The processing part and the user interface part communicate via a messaging system.
See Q: How should I communicate between the 'Processing' and the 'User Interface'? for details.

Q: Does VST 3 implement methods like beginEdit and endEdit known from VST 2?

Yes, and supporting this is essential for automation. For details, please see Parameters and Automation.

Q: Does VST 3 include variable Input/Output processing like processVariableIo of VST 2?

Not in version 3.1.0; we plan something in this direction for later. (Note: this variable I/O processing was used, for example, by time-stretching Plug-ins.)

Q: What is the equivalent to the VST 2 kPlugCategOfflineProcess?

There is none: VST 3 does not support offline processing the way VST 2 did (that interface was exclusively used by WaveLab). But it is possible to use VST 3 Plug-ins in an offline context, meaning the process function may be called faster than realtime, for example during an export or batch processing. If the Plug-in does not support faster-than-realtime processing, it should add kOnlyRealTime to its category.


Q: How does persistence work?

An instantiated Plug-in often has state information that must be saved in order to properly re-instantiate that Plug-in at a later time. A VST 3 Plug-in has two states which are saved and reloaded: its component state and its controller state.

The sequence of actions for saving is:

  • component->getState (compState)
  • controller->getState (ctrlState)

The sequence of actions for loading is:

  • component->setState (compState)
  • controller->setComponentState (compState)
  • controller->setState (ctrlState)

In the latter sequence you can see that the controller part receives the component state. This allows the two parts to synchronize their states.

Q: What's the difference between IEditController::setComponentState and IEditController::setState?

After a preset is loaded, the host calls Steinberg::Vst::IEditController::setComponentState and Steinberg::Vst::IComponent::setState, both delivering the same information. Steinberg::Vst::IEditController::setState is called by the host so that the Plug-in can update its controller-dependent parameters, e.g. positions of scroll bars. Prior to this, there should have been a call by the host to Steinberg::Vst::IEditController::getState, where the Plug-in writes these various parameters into the stream.

See also
Q: How does persistence work?


Q: How is a normalized value converted to a discrete integer value in VST 3?

A detailed description of parameter handling is provided here: Parameters.

Q: What is the return value tresult?

Almost all VST 3 interface methods return a tresult value. This integer value allows returning more differentiated error/success information to the caller than just a boolean with its true and false.

The various possible values are defined in funknown.h. They are the same values as used in COM. Be careful when checking a tresult return value, because on success kResultOk is returned, which has the integer value 0:

// this is WRONG!
if (component->setActive (true))
    ...

// this is CORRECT!
if (component->setActive (true) == kResultOk)
    ...

// and this is CORRECT too!
if (component->setActive (true) != kResultOk)
{
    // error handling...
}

Q: Which version of Steinberg Sequencers support VST 3?

In order to load VST 3 Plug-ins you need at least:

  • For VST 3.0.0 Cubase/Nuendo 4.1.2 is needed.
  • For VST 3.0.1 features Cubase 4.2 is needed.
  • For VST 3.5.0 features Cubase 6.0 is needed.

Q: How are the different Speakers located?

Here is an overview of the main defined speaker locations (check the Speakers enum and the SpeakerArr namespace). A SpeakerArrangement is a bitset combination of speakers. For example, SpeakerArr::kStereo = kSpeakerL | kSpeakerR, represented in hexadecimal as 0x03 and in binary as 11.
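As a minimal illustration (the two speaker bit values below follow the SDK's Speakers enum for the L and R positions; the constant name kStereoArrangement is invented to keep the snippet self-contained):

```cpp
#include <cassert>
#include <cstdint>

// A SpeakerArrangement is a bitset: one bit per speaker position.
static const uint64_t kSpeakerL = 1 << 0; // left
static const uint64_t kSpeakerR = 1 << 1; // right
static const uint64_t kStereoArrangement = kSpeakerL | kSpeakerR; // 0x03, binary 11
```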

Q: Why do Plug-ins need subcategories?

When you export your Plug-in in the factory (check againentry.cpp: DEF_CLASS2), you have to define a subcategory string, which can be a combination of several strings, like "Fx|Dynamics|EQ" for example.

Currently the subcategory string is used by Cubase/Nuendo to organize the Plug-ins menu like this:

Computation of Folder Name (SubCategories => folder in menu)
      "Fx"                        => "Other"
      "Fx|Delay"                  => "Delay"
      "Fx|Mastering|Delay"        => "Mastering"
      "Spatial|Fx"                => "Spatial"
      "Fx|Spatial"                => "Spatial"
      "Instrument|Fx"             => if used as VSTi "" else if used as Insert "Other"
      "Instrument|Sampler"        => "Sampler"
      "Fx|Mastering|Surround"     => "Mastering\Surround"
      "Fx|Mastering|Delay|Stereo" => "Mastering\Stereo"
      "Fx|Mastering|Mono"         => "Mastering\Mono"

This string should only be a hint about what type of Plug-in it is; it is not possible to define all types. If you have wishes for new categories, please discuss them in the VST Developer Forum (https://sdk.steinberg.net) and we may add them to future versions of the SDK.

Q: Is it possible to define a Plug-in as Fx and Instrument?

Yes, it is possible: you can use the CString kFxInstrument ("Fx|Instrument") as subcategories (check DEF_CLASS2). In this case, Steinberg sequencers will allow the user to load it both as an effect and as an instrument. Loaded as an instrument (in current Steinberg sequencers <= 5.5), the Plug-in's audio input buses will not be enabled.

Q: Is it possible to ask a Plug-in about which speaker arrangements are supported?

No. Before the Plug-in is instantiated, there is no way to ask the factory which arrangements a given Plug-in supports. The host can instantiate the Plug-in and, before starting processing, set different speaker arrangements and check whether the Plug-in supports them.

Q: Which version of Steinberg Sequencers support VST 3 Note Expression?

In order to use Note Expression with VST 3 Plug-ins you need at least Cubase/Nuendo 6.0.

Q: When compiling for Mac AudioUnit, I have a compiler error in AUCarbonViewBase.cpp. What can I do?

Due to an issue in the Mac CoreAudio SDK, not yet fixed by Apple, you have to apply a small patch to the file AUCarbonViewBase.cpp (located in CoreAudio/AudioUnits/AUPublic/AUCarbonViewBase). Change:

HISize originalSize = { mBottomRight.h, mBottomRight.v };

to:

HISize originalSize = { static_cast<CGFloat>(mBottomRight.h), static_cast<CGFloat>(mBottomRight.v) };

Q: How can I develop a SurroundPanner Plug-in which is integrated in Nuendo as a panner?

In order to make a surround panner Plug-in selectable as a panner (post-fader) in Nuendo, the Plug-in should have kSpatial as subcategory, or kSpatialFx (in order to make it usable as an insert too). For example:

DEF_CLASS2 (INLINE_UID_FROM_FUID(Steinberg::Vst::SPannerProcessor::cid),
FULL_VERSION_STR, // Plug-in version (to be changed)

Be sure that you override the function "tresult PLUGIN_API setBusArrangements ...", which allows you to get the current bus arrangements.

Q: How can I validate my Plug-in?

You can use the validator command line tool, which can be called automatically after you compile your Plug-in (when you use the cmake setup provided by the SDK). This applies some standard validations. The second validation tool is the VST 3 Plug-in Test Host: check its menu entry View => Open Plug-In Unit Tests Window.


Copyright ©2019 Steinberg Media Technologies GmbH. All Rights Reserved. This documentation is under this license.