COP Status Reply

The reply to a VIRGO status request either contains a new JSON configuration that should be applied to VIRGO or it is empty. VIRGA indicates which case applies through the HTTP status code of the reply:

200 - State Change

VIRGA responds with an HTTP status code 200 if it has determined that the configuration stored in virgod is not up-to-date with respect to the configuration stored in VIRGA. The body of the reply should contain the new configuration and the new modification date.

204 - No Change

VIRGA responds with an HTTP status code 204 if it has determined that the configuration stored by virgod is up-to-date and requires no change. VIRGA makes this determination by comparing the “mod-date” sent by virgod with the “mod-date” stored in its own persistent store.
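
For illustration, the determination works roughly like this; the “mod-date” value below is purely illustrative and the exact shape of the status request is outside the scope of this section:

virgod includes its currently stored modification date in the status request:

{
   "mod-date": "767878"    // illustrative value
}

If this value matches the “mod-date” stored by VIRGA, VIRGA replies with status code 204 and an empty body. If the values differ, VIRGA replies with status code 200 and includes the new configuration together with the new “mod-date” in the body.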

Delta Updates

The following code block shows the outline of a delta update:

{
   // [required][epoch time] The new modification date associated with the new state
   "mod-date": "767878"

   // [optional] Tells VIRGO how to apply the new state to its current state:
   // "current": means that the new state should be applied on top of the current VIRGO state. This is the default behavior.
   // "initial": means that VIRGO should FIRST reset its state back to the factory settings before it applies the new state. This allows you to reset VIRGO.
   "relative-to": "current"

   // [required] Tells VIRGO that this is a delta update that contains changes which should be applied relative to the current configuration.
   "apply-as": "delta"

   // [optional] The new global state.
   // The current global state is retained if no new global state is provided.
   "global": {
      "status-interval": 200    // [milliseconds] status reporting interval in ms (default: 500)
      ...
   }

   // [optional] Specifies which feeds should be removed. Note that removals are always
   // carried out before additions.
   "feed.removals": [
      "video_1", "video_2", ...
   ]

   // [optional] Specifies which feeds should be added.
   "feed.additions": {
      "camera_1": { ... }
      "camera_2": { ... }
      ...
   }

   // [optional] Specifies which feeds should be updated.
   "feed.updates": {
      "camera_1": { ... }
      "camera_2": { ... }
      ...
   }
}
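
As an illustration, a minimal delta update that only shortens the status reporting interval and removes a single feed might look like the following sketch; the modification date, interval, and feed name are illustrative values:

{
   "mod-date": "767879",
   "apply-as": "delta",
   "global": {
      "status-interval": 250    // illustrative value
   },
   "feed.removals": [
      "video_2"                 // illustrative feed name
   ]
}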

Full Updates

Note that you should always prefer delta updates over full updates because full updates are inherently inefficient and suffer from race conditions.

The following code block shows the outline of a full configuration update:

{
   // [required][epoch time] The new modification date associated with the new state
   "mod-date": "767878"

   // [optional] Tells VIRGO how to apply the new state to its current state:
   // "current": means that the new state should be applied on top of the current VIRGO state. This is the default behavior.
   // "initial": means that VIRGO should FIRST reset its state back to the factory settings before it applies the new state. This allows you to reset VIRGO.
   "relative-to": "current"

   // [optional] Tells VIRGO that the update is a full update that should replace the current configuration.
   "apply-as": "full"

   // [optional] The new global state.
   // The current global state is retained if no new global state is provided.
   "global": {
      "status-interval": 200    // [milliseconds] status reporting interval in ms (default: 500)
      ...
   }

   // [optional] The new per-feed state. This is a dictionary. The dictionary key is the name of a feed
   // and the value is another dictionary which contains the feed's new state.
   // The current feed state is retained if no new per-feed state is provided.
   "feeds": {
      "camera_1": { ... }
      "camera_2": { ... }
      ...
   }
}
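
As an illustration, a full update that replaces the entire per-feed configuration with a single feed might look like the following sketch; the modification date, feed name, and stream URL are illustrative values:

{
   "mod-date": "767880",
   "apply-as": "full",
   "global": {
      "status-interval": 500
   },
   "feeds": {
      "camera_1": {                                         // illustrative feed name
         "enabled": true,
         "input.type": "stream",
         "input.stream.url": "rtsp://203.0.113.10/stream1"  // illustrative URL
      }
   }
}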

Configuration Sections

The configuration is organized into sections. Sections are optional. A section which is not mentioned in the reply is not applied and virgod retains the currently active state for this section. This is true for both “full” and “delta” “apply-as” modes.

A status reply message may contain the following sections:

Section apply-as Description
global delta, full Contains state that applies to the virgod daemon itself.
feeds full Contains per-feed state information.
feed.additions delta Contains dictionaries of feeds that should be added. See the feeds section below for a description of a feed dictionary.
feed.removals delta Contains the names of feeds that should be removed. Note that this is an array of feed names.
feed.updates delta Contains dictionaries of feeds that should be updated with new state. See the feeds section below for a description of a feed dictionary.
log delta, full Contains information to configure the logging behavior.
update delta, full Contains information about the version to which VIRGO should be upgraded or downgraded.

The “feed.xxx” sections are applied in the order “feed.removals” followed by “feed.additions” and finally “feed.updates”.

Global Section

The following properties are supported in the global section which contains configuration information that applies to VIRGO itself:

Property Type Default Description
status-interval Milliseconds 500 Status reporting time interval in ms.

Feeds Section

The following properties are supported in the feeds section which contains feed-specific configuration information:

Property Type Default Description
directory String? client ID Directory name
source String? client ID Source name
site String? client ID Site name
enabled Bool false Marks the feed as enabled or disabled.
input.type String The type of feed input. Must be “stream”.
input.loop Bool false Enables looping of the feed input. Only video file-based feeds support looping. Ignored for cameras.
input.video-clock.enabled Bool false Enables enforcement of the video clock. Video files will be processed as fast as possible if the video clock is turned off.
input.lens-correction.enabled Bool false Enables or disables lens correction for the camera.
input.lens-correction.k1 Float 0 The “k1” lens correction factor.
input.lens-correction.k2 Float 0 The “k2” lens correction factor.
input.mirroring.enabled Bool false Whether the video image should be mirrored before detection and recognition operations are executed.
input.rotation.angle Int 0 Whether the video should be rotated before detection and recognition operations are executed. Valid values are 0, 90, 180, and 270.
input.crop-rectangle.enabled Bool false When this is true the defined crop rectangle is used for the camera feed. The crop rectangle is specified in a normalized coordinate system, which means the full frame spans from (0, 0) to (1, 1).
input.crop-rectangle.left Double 0 The normalized left coordinate relative to the video of where the crop rectangle origin should be.
input.crop-rectangle.top Double 0 The normalized top coordinate relative to the video of where the crop rectangle origin should be.
input.crop-rectangle.width Double 1 The normalized width value relative to the video of how big the crop rectangle size should be.
input.crop-rectangle.height Double 1 The normalized height value relative to the video of how big the crop rectangle size should be.
input.contrast-enhancement.enabled Bool false Enables contrast enhancement of the input video frame.
input.contrast-enhancement.low-light-threshold Double 0.02 Low-light-threshold for contrast enhancement.
input.contrast-enhancement.exposure-boost Double 0 Exposure boost for contrast enhancement.
input.contrast-enhancement.detection-only Bool false If true then contrast enhancement is applied to the image which is handed off to the face detector only. If false then contrast enhancement is applied to the video frame as delivered by the camera. Consequently the contrast enhancement effect is visible in the video preview if this option is off but not if it is on.
accelerator String “auto” The type of acceleration that a feed should use. See the “Feed Acceleration Types” table below for a list of the supported acceleration types.
accelerator.gpu-id Int The GPU identifier to use when GPU acceleration is in use. This is only used if the “accelerator” property is set to gpu or auto (and gpu is used). If specified, this forces the given GPU to be used; if that fails, the feed falls back to the CPU. This is an advanced setting that should only be used in very specific cases.
statistics.enabled Bool false Whether VIRGO should record and report statistics for this feed.
detector.detect-badges Bool false Whether detection of badges should be enabled for this feed.
detector.maximum-input-resolution-badges Int 4320 Maximum resolution of the input image. Bigger images are scaled down (aspect-ratio preserving) to this resolution before detection.
detector.minimum-searched-badge-size Int 20 The badge detector is advised to search for badges of at least this size. This value is applied while searching the image.
detector.minimum-required-badge-size Int 0 The minimum size of badges to accept from the detector. Only badges with at least this size are eligible for recognition.
detector.detect-faces Bool true Whether detection of faces should be enabled for this feed.
detector.minimum-searched-face-size Int 80 The face detector is advised to search for faces of at least this size. This value is applied while searching the image.
detector.minimum-required-face-size Int 0 The minimum size of faces to accept from the detector. Only faces with at least this size are eligible for recognition.
detector.maximum-input-resolution Int 720 Maximum resolution of the input image. Bigger images are scaled down (aspect-ratio preserving) to this resolution before detection.
detector.maximum-concurrent-detections Int 0 The maximum number of concurrent detections to allow. 0 means to automatically set this.
detector.detect-people Bool false Whether detection of people should be enabled for this feed. This detects any part of a person’s body and not just the face.
detector.minimum-required-person-to-screen-height-proportion Double 0 Specifies the ratio of the person to the screen height. The value is in the range 0 to 1 and allows decimal precision. For example, if you don’t want the person to show up unless they are greater than 25% of the image height then specify a value of 0.25.
detector.minimum-consecutive-detections-required-person Int 0 This is the number of consecutive detections that are required before reporting that the person (based on object id) was actually detected and can be used to filter out false positives.
detector.detect-people-every-n-frames Int 1 This can be used to avoid running person detection on every frame. Since person detection requires a lot of GPU processing, this value can be increased on less powerful hardware so that person detection only runs every Nth frame, saving processing power and keeping up with real-time detection.
detector.person-detection-threshold Double 0.4 This is the detection threshold to use when matching objects. The higher the threshold the more strict the matching will be and the higher the confidence will be that the actual object matches.
detector.person-separation-threshold Double 0.45 This threshold controls the object separation when the objects are overlapping. This determines how much overlap is needed before no longer detecting the object with the weaker footprint.
detector.detect-people-model String “balanced” Valid values: “max-accuracy” - Use a larger model for better accuracy, but the speed will be slower. “max-speed” - Use a smaller model for faster speed, but the accuracy will be lower. “balanced” - Use a larger model with slightly lower precision, resulting in faster speeds than the “max-accuracy” model without sacrificing too much accuracy.
detector.initial-face-selection-threshold Double 0.8 The initial face candidate threshold that is used during face detection.
detector.middle-face-selection-threshold Double 0.85 The middle face candidate threshold that is used during face detection.
detector.final-face-selection-threshold Double 0.9 The final face candidate threshold that is used during face detection.
recognizer.minimum-face-size Int 120 The minimum size of faces to detect. This value is applied after searching the image.
recognizer.minimum-face-size-merging Int 220 The minimum resolution a recognition candidate must have in order to allow merging.
recognizer.minimum-face-size-identification Int 220 The minimum resolution that a recognition candidate image must have in order to allow the insertion of the candidate image into the Cloud database.
recognizer.minimum-center-pose-quality Float 0.05 The minimum center pose quality that a face image must have before we try to recognize the face.
recognizer.pose-configuration-identification-enabled Bool false If this is true then pose configuration is enabled for identification. The pose configuration allows for replacing center pose quality with advanced parameters such as yaw, pitch and roll. If this is true then recognizer.minimum-center-pose-quality is ignored and the pose configuration parameters are used instead. Currently these are recognizer.maximum-yaw-identification, recognizer.maximum-pitch-identification, and recognizer.maximum-roll-identification.
recognizer.maximum-yaw-identification Double 0.4 This is the maximum yaw value used to determine if the face is looking straight. The yaw value is the side to side movement of the face.
recognizer.maximum-pitch-identification Double 0.4 This is the maximum pitch value used to determine if the face is looking straight. The pitch value is the forward/backward movement of the face.
recognizer.maximum-roll-identification Double 0.15 This is the maximum roll value used to determine if the face is looking straight. The roll value is the side to side tilt movement of the face.
recognizer.minimum-center-pose-quality-merging Float 0.59 The minimum CPQ that a recognition candidate must have in order to allow merging.
recognizer.minimum-center-pose-quality-identification Float 0.59 The minimum CPQ that a recognition candidate must have in order to allow the insertion of the candidate image into the Cloud database.
recognizer.minimum-face-contrast-quality Float 0.2 The minimum face contrast quality that a face image must have before we try to recognize the face.
recognizer.minimum-face-contrast-quality-merging Float 0.59 The minimum FCQ that a recognition candidate must have in order to allow merging.
recognizer.minimum-face-contrast-quality-identification Float 0.59 The minimum FCQ that a recognition candidate must have in order to allow the insertion of the candidate image into the Cloud database.
recognizer.identity-recognition-threshold Float 0.54 The identity recognition threshold.
recognizer.minimum-face-sharpness-quality Float 0.3 The minimum face sharpness quality that a face image must have before we try to recognize the face.
recognizer.minimum-face-sharpness-quality-merging Float 0.59 The minimum FSQ that a recognition candidate must have in order to allow merging.
recognizer.minimum-face-sharpness-quality-identification Float 0.59 The minimum FSQ that a recognition candidate must have in order to allow the insertion of the candidate image into the Cloud database.
recognizer.maximum-clip-ratio Float 0.2 The maximum clip ratio on either side the recognition candidate might have.
recognizer.maximum-clip-ratio-identification Float 0 The maximum clip ratio on either side the insertion candidate might have.
recognizer.detect-gender Bool false Whether to enable the detection of gender information.
recognizer.detect-age Bool false Whether to enable the detection of age information.
recognizer.detect-sentiment Bool false Whether to enable the detection of sentiment information.
recognizer.learning-enabled Bool false Whether the recognizer is allowed to learn new identities.
recognizer.maximum-concurrent-recognitions Int 0 The maximum number of concurrent recognitions to allow. 0 means to automatically set this.
recognizer.detect-occlusion Bool false Whether to enable occlusion detection during recognition.
recognizer.maximum-occlusion Double 0.5 Valid values are in the range of 0.0 - 1.0. This is the maximum occlusion value that is allowed when inserting new recognition candidate images into the Cloud database. If the face is occluded with a value greater than this then the face will not be added, but if it is less than or equal to this value then it will be added.
recognizer.learn-occluded-faces Bool false Whether to enable learning of occluded faces regardless of the maximum occlusion setting. If this is true then the server configuration will be used, which by default doesn’t do any occlusion detection.
recognizer.identity-proximity-threshold-allowance Double 0.13 The identity recognition threshold proximity allowance. The lower the value, the more strict recognition is.
tracker.maximum-linger-frames Int 30 Determines for how many additional frames we continue to keep a tracked face around after we have failed to detect it in the most recent frame. This makes the tracker resilient against intermittent loss of a face.
tracker.minimum-number-identical-recognitions-lock Int 1 The minimum number of consecutive recognition attempts that we must run and produce the same person identity before we lock onto this identity.
tracker.minimum-required-consecutive-badge-detections Int 0 This is the number of consecutive detections that are required before reporting that the object (based on object id) was actually detected and can be used to filter out false positives.
tracker.reconfirmation-interval Int 1000 Identity reconfirmation time interval in ms.
tracker.initial-recognition-attempts Int 3 The number of initial recognition attempts to make on an unrecognized person as fast as possible.
tracker.failed-recognition-back-off-interval Milliseconds 340 After making the initial recognition attempts as fast as possible, back off by this amount for each subsequent recognition attempt to slow down. This continues until the retry interval is reached.
tracker.failed-recognition-retry-interval Milliseconds 1 The interval at which to run recognition requests if the face has not been recognized.
tracker.identity-relearn-interval-days Float 0 Updates the identity only in the case where the identity currently saved is older than the updated identity.
tracker.identity-update-better-image Bool false Updates the identity in the case where the identity currently saved is of lower quality (in all aspects) than the updated identity.
tracker.max-position-change-relative-to-face Int 115 The maximum position change, specified in percentage relative to the object size, to continue tracking.
tracker.max-size-change-relative-to-face Int 50 The maximum size change, specified in percentage relative to the object size, to continue tracking.
tracker.minimum-number-identical-recognitions-learn Int 2 This is the number of consecutive recognitions that need to occur before adding a new identity to the system.
tracker.enable-face-size-correlation Bool true Enable face correlation of tracked faces, which compares detected faces looking for a change in area.
tracker.enable-face-bounds-prediction Bool true Enable face bounds prediction, which predicts which direction the face is moving to maintain tracking.
tracker.stop-tracking-on-failed-re-recognition Bool false If recognition fails when re-recognizing a person then delete the identity that was created.
tracker.reconfirm-identity-in-video-on-every-key-frame Bool false When a key frame is encountered in a video file all the faces that are being tracked are marked as unconfirmed so that their identities are reconfirmed to make sure they are the same person. This setting only applies to video files and not live video. If a video file does not represent recorded live video then this can typically be set to true for better tracking during scene changes.
tracker.min-failed-recognitions-to-stop-tracking-identity Int 3 When a face is being tracked recognitions are continually confirming the identity. The identity is also being verified if it is transferred from a person object. In these cases, if the recognition or verification fails this number of consecutive times then the identity will be reset and no longer associated with the face because we are no longer sure it is the same identity.
tracker.detect-unauthorized-movement.person.left Bool false Enables unauthorized movement detection in the left direction.
tracker.detect-unauthorized-movement.person.left-distance Double 0.1 The distance the tracked object is allowed to move to the left. The distance is provided in relative terms as a fraction of screen width in range 0 - 1.
tracker.detect-unauthorized-movement.person.right Bool false Enables unauthorized movement detection in the right direction.
tracker.detect-unauthorized-movement.person.right-distance Double 0.1 The distance the tracked object is allowed to move to the right. The distance is provided in relative terms as a fraction of screen width in range 0 - 1.
tracker.detect-unauthorized-movement.person.up Bool false Enables unauthorized movement detection in the upward direction.
tracker.detect-unauthorized-movement.person.up-distance Double 0.1 The distance the tracked object is allowed to move upward. The distance is provided in relative terms as a fraction of screen height in range 0 - 1.
tracker.detect-unauthorized-movement.person.down Bool false Enables unauthorized movement detection in the downward direction.
tracker.detect-unauthorized-movement.person.down-distance Double 0.1 The distance the tracked object is allowed to move downward. The distance is provided in relative terms as a fraction of screen height in range 0 - 1.
reporter.enabled Bool true Enables or disables event reporting.
reporter.report-event-face Bool true Enables the inclusion of face thumbnails in event reports.
reporter.report-event-scene Bool false Enables the inclusion of scene images in event reports.
reporter.minimum-event-duration-identified Milliseconds 0 The minimum allowed recognized person event duration in ms. Events below this value will not be reported.
reporter.minimum-event-duration-unidentified Milliseconds 0 The minimum allowed unrecognized person event duration in ms. Events below this value will not be reported.
reporter.delay Milliseconds 0 Delay the event reporting to the server by this amount in ms.
reporter.events-initial-date-offset EpochTime nil When processing a video file for events this value can be used to set the initial date offset to use for the events being processed. By default video events use the timestamps.
reporter.report-unrecognizable-events Bool true Reports events for people that are not recognizable.
reporter.report-stranger-events Bool true Reports events for people that are strangers. These are people not registered by the system after running facial recognition on the face.
reporter.report-speculated-events Bool true Reports events for speculated people. This means faces that aren’t a 100% match, but are close.
reporter.update-images Bool true Update the thumbnail images with higher quality images during the course of the event if possible.
reporter.update-in-progress-event-properties Bool false If this is enabled then any event properties that change will be updated at the specified interval. Many properties do change periodically, such as images or other averages that are continually computed.
reporter.update-in-progress-event-interval Milliseconds 1000 This specifies the interval time in which to update event properties that change.
reporter.stranger-events.only-if-occluded Bool false This specifies whether only occluded stranger events should be reported. By default, stranger events are only generated if the face is not occluded (when occlusion detection is enabled); otherwise they are generated when the face meets the identification image quality metrics. If this option is set to true then stranger events will be reported only if the face is occluded.
reporter.report-secondary-events Bool false Reports secondary events. Secondary events are events that are associated with a primary event via the rootEventId property in the event. It is usually preferred to report only the primary events; secondary events need to be reported only if more detail is needed. If this is disabled then all events with a rootEventId property set to a primary event will not be reported. Only events whose rootEventId is not set will be reported, which are the primary events.
capture.lease-date EpochTime 0 The date of the capture lease.
capture.size Int 480 Specifies the size of the smaller dimension of the image that will be sent.
capture.maximum-frames Int 1200 If > 0, enables the capture of at most this many frames; if 0, disables capture.
capture.frame-delay Milliseconds 200 Wall-clock time between consecutive frame captures. If this value is set to 0 then VIRGO will capture frames as fast as the native frame rate is playing the video.
capture.deposite-base-url URL? none The base URL to which captured frames should be posted.
recognizer.detect-smile-action Bool false Enables the smile action recognizer.
recognizer.smile-pre-delay Milliseconds 100 The amount of time that there should be no smile.
recognizer.smile-duration Milliseconds 0 The amount of time that the smile should last.
recognizer.smile-identity-threshold-boost Double 0.13 The smile threshold to boost temporarily during the smile action.
recognizer.smile-thresholds-enabled Bool false Enables the smile threshold values.
recognizer.smile-threshold-neutral Double -0.1 The threshold in which there is no smile.
recognizer.smile-threshold-smiling Double 0.7 The threshold in which there is a smile.
recognizer.detect-pose-action Bool false Enables the pose liveness action recognizer.
recognizer.pose-action-min-center-pose-quality Double 0.5 The minimum center pose quality to use when detecting the initial center pose.
recognizer.pose-action-max-profile-pose-quality Double 0.26 The maximum center pose quality to use when detecting the final profile pose.
recognizer.pose-action-min-profile-confidence-start Double 0.35 The minimum profile pose confidence to allow during the initial center pose detection phase.
recognizer.pose-action-max-profile-confidence-end Double 0.60 The maximum profile pose confidence to allow during the final profile pose detection phase.
recognizer.pose-action-min-transtion-poses Int 2 The minimum number of required center pose samples during the transition from center to profile pose.
recognizer.pose-action-required-confirmations Int 3 The number of consecutive confirmations required to enter the initial center pose detection phase.
recognizer.pose-action-min-profile-similarity Double 0.86 The minimum similarity score required when verifying the final profile pose.
recognizer.pose-action-min-detections-per-second Int 15 The minimum number of frames per second that is required during the process.
recognizer.pose-action-max-cpq-jump-after-discontinuity Double 0.15 The maximum change between samples while the pose is changing from center to profile if lingering.
recognizer.pose-action-max-cpq-jump-in-continuity Double 0.18 The maximum change between samples while the pose is changing from center to profile.
recognizer.pose-action-max-profile-pose-roll Double 0.3 The maximum roll threshold in either direction in which the face can rotate when determining whether the face is in profile pose.
recognizer.pose-action-min-profile-pose-yaw Double 0.81 The minimum profile pose yaw value that is required during the final profile pose detection phase.
recognizer.pose-action-profile-pose-required-confirmations Int 1 The number of consecutive confirmations required to enter the final profile pose detection phase.
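
As an illustration, a feed dictionary combining a few of the properties above, as it would appear under the “feeds” or “feed.additions” section, might look like the following sketch; all names and values are illustrative and any property that is left out keeps its default:

"camera_1": {
   "directory": "hq",                                    // illustrative
   "source": "lobby",                                    // illustrative
   "site": "entrance",                                   // illustrative
   "enabled": true,
   "input.type": "stream",
   "input.stream.url": "rtsp://203.0.113.10/stream1",    // illustrative URL
   "accelerator": "auto",
   "detector.detect-faces": true,
   "detector.minimum-searched-face-size": 80,
   "recognizer.detect-age": false,
   "tracker.maximum-linger-frames": 30,
   "reporter.enabled": true
}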

Feed Properties for “Stream” Inputs

Property Type Default Description
input.stream.url URL The video stream URL. The URL must point to an RTSP, HTTP, or FILE stream.
input.stream.name String A friendly name used for display purposes.
input.stream.id String Identifier used to connect to a stream if the URL is blank.
input.stream.rtsp.transport String udp The transport protocol that should be used while accessing the RTSP video stream. Must be one of “udp”, “tcp”, or “udp-multicast”.
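
As an illustration, the input-related properties of a feed reading an RTSP stream over TCP might look like the following sketch; the URL and display name are illustrative:

"input.type": "stream",
"input.stream.url": "rtsp://203.0.113.10/axis-media/media.amp",   // illustrative URL
"input.stream.name": "Lobby Entrance",                            // illustrative name
"input.stream.rtsp.transport": "tcp"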

Feed Acceleration Types

Value Description
auto VIRGO will automatically pick the best available acceleration type. For example VIRGO will assign the feed to one of the available GPUs if there is still processing capacity available. Otherwise VIRGO will assign the feed to the CPU.
cpu The feed should exclusively run on the CPU and not use any GPU even if a GPU would be available.
gpu The feed should exclusively run on a GPU and not use the CPU for video decoding, graphics processing, or detection. The feed will fail if no GPU is available.
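
As an illustration, pinning a feed to a specific GPU might look like the following sketch; the GPU identifier is illustrative and, as noted above, forcing a specific GPU is an advanced setting:

"accelerator": "gpu",
"accelerator.gpu-id": 0    // illustrative GPU identifier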

Feed.Additions, Feed.Removals, and Feed.Updates Sections

The feed.removals section lists the names of the feeds that should be removed. This section is always applied first. The feed.additions section lists the descriptions of the feeds that should be added. This section is always applied after the removals. Finally, the feed.updates section lists the descriptions of feeds that should be updated with new state; this section is always applied last.
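
The following sketch shows a delta update that exercises all three sections; regardless of the order in which the sections appear in the JSON, VIRGO applies the removals first, then the additions, and finally the updates. The modification date, feed names, and values are illustrative:

{
   "mod-date": "767881",
   "apply-as": "delta",
   "feed.removals": [
      "video_1"                                             // illustrative feed name
   ],
   "feed.additions": {
      "camera_3": {                                         // illustrative feed name
         "enabled": true,
         "input.type": "stream",
         "input.stream.url": "rtsp://203.0.113.11/stream1"  // illustrative URL
      }
   },
   "feed.updates": {
      "camera_1": {                                         // illustrative feed name
         "enabled": false
      }
   }
}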

Log Section

The following properties are supported in the log section which contains logging related configuration:

Property Type Default Description
lease-date EpochTime 0 The date of the log lease.
deposite-url URL? none The URL to which the most recently recorded log statements should be posted.
deposite-interval Milliseconds 5000 The minimum time interval between consecutive log deposit operations.
levels Dictionary<String, String> empty A dictionary which maps a log package name to a log level.
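
As an illustration, a log section that uploads the log every 10 seconds and raises the level of two log packages might look like the following sketch; the lease date, deposit URL, package names, and levels are illustrative (see Logging for the actual package names and levels):

"log": {
   "lease-date": 1712345678,                           // illustrative epoch time
   "deposite-url": "https://example.com/virgo/logs",   // illustrative URL
   "deposite-interval": 10000,
   "levels": {
      "feed": "debug",                                 // illustrative package and level
      "tracker": "info"                                // illustrative package and level
   }
}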

See Logging for a description of the logging mechanism.

Update Section

The following properties are supported in the update section which contains information related to upgrading or downgrading the currently installed VIRGO version:

Property Type Default Description
version Version none The version to which the current VIRGO installation should be upgraded or downgraded.
download-url URL none The URL from which VIRGO should fetch the update archive.
progress-url URL? none The URL to which update events should be sent. Update events are sent periodically at a time interval equal to “progress-interval”.
progress-interval Milliseconds? 1000 The time interval at which update events should be sent to the “progress-url” URL.
log.enabled Bool? false Set to true to enable the inclusion of logging information in the update events.
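
As an illustration, an update section that instructs VIRGO to move to a different version might look like the following sketch; the version number and URLs are illustrative:

"update": {
   "version": "1.2.3",                                               // illustrative version
   "download-url": "https://example.com/virgo/virgo-1.2.3.tar.gz",   // illustrative URL
   "progress-url": "https://example.com/virgo/update-progress",      // illustrative URL
   "progress-interval": 1000,
   "log.enabled": true
}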

See Software Updates for a description of the updates mechanism.

“relative-to” and Resetting the Current State

The current state stored in VIRGO may be reset back to the factory defaults by including the “relative-to” JSON key with a value of “initial”. This causes VIRGO to delete its persistently stored state and to reload its state from the factory defaults. This action also resets the modification date back to 0. Virgo then applies the new state as listed in the status message reply. This new state together with the new modification date is then persistently stored.
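
As an illustration, the following sketch resets VIRGO back to the factory defaults and then applies a single feed on top of them; the modification date, feed name, and URL are illustrative:

{
   "mod-date": "767882",
   "relative-to": "initial",
   "apply-as": "full",
   "feeds": {
      "camera_1": {                                         // illustrative feed name
         "enabled": true,
         "input.type": "stream",
         "input.stream.url": "rtsp://203.0.113.10/stream1"  // illustrative URL
      }
   }
}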

“apply-as” and Delta vs. Full Updates

VIRGO is able to interpret the state included in a status message reply either as the description of a complete (full) configuration or as a delta relative to the current VIRGO configuration. By default the configuration included in a status message reply is interpreted as a full configuration update which completely replaces the current state. You may change this behavior by adding an “apply-as” key to the reply:

An “apply-as”: “delta” mode means that you may leave out key-value pairs which are not supposed to change. VIRGO automatically reuses the current value for any key that is missing in the new global or per-feed state. Here is an example:

This is the current state of the "camera_1" feed as stored by Virgo. Note that we show only some of the state here; however, you should assume that all state is fully defined:
  
{
   "source" : "foo"
   "site": "bar"
   ...
   "lens-correction.enabled": false
   ...
   "detector.minimum-searched-face-size": 80
   ...
   "recognizer.detect-age": false
   ...
   "tracker.maximum-linger-frames": 20
   ...
   "reporter.enabled": true
   ...
   "capture.max-frames": 0
   ...
}
  
Now all we want to do is to enable lens correction. We don't want to change any other feed state. To achieve this, we simply send a new feed state with just those keys and values that we want to change. Virgo will retain all other values as they are:
  
Status message reply:
{
   "mod-date": 878789
   "apply-as": "delta"
   "feeds": {
      "camera_1": {
         "lens-correction.enabled": true
         "lens-correction.k1": 0.6
         "lens-correction.k2": 0.7
      }
   }
}
 
After the update:
  
All state remains as it was before the update except that the "lens-correction.enabled", "lens-correction.k1" and "lens-correction.k2" key-value pairs have been applied to the previous state and the modification date has been advanced to the new date.

Note that “delta” updates are generally preferred over “full” updates because full updates are inherently inefficient and suffer from race conditions (see “Full Updates” above).

See Also