Avatars, Environment and Other Dynamic Variables

How Scenarios Control The Experience

Written by Judit Vigh

PCS Spark can deliver a highly dynamic learning experience, presenting various avatars, locations, facial expressions, verbal responses and physical exams. These are all driven by patient scenario content that you fully control with the PCS content authoring tools. In this article, we describe how to set up your content to customize each of these elements of the PCS Spark experience.

Avatar & Environment

The Avatar selector sets the patient's appearance. Each avatar has a default age, which you can modify within that avatar's available age range.

The Environment selector sets the environment the patient appears in.

Avatars and Age Ranges:

  • Age: 1-15 years
  • Age: 1-8 years
  • Age: 16-40 years
  • Age: 16-44 years
  • Age: 41-64 years
  • Age: 45-59 years
  • Age: 65-99 years
  • Age: 55-99 years

Environments:

  • Ambulance (upright)
  • Ambulance (prone)


Conversing with the Patient

When you are ready to converse with your patient, make them start listening: either click the microphone icon (the main controller button in VR) or call out their first name (e.g. “Hello Veronica”) if Wake on Name is enabled for your virtual simulator in Simulator Settings.

When you are done conversing with your patient, make them stop listening with another click of the microphone icon (the main controller button in VR), or by saying something like “Thank you, take care”, “Bye for now”, or “See you later”.

Voice Memo: press the voice memo button at the top of the logs (press and hold the main controller button in VR) to record a brief voice memo. Voice memos are transcribed and saved to the log.


Dynamic Variables

Type a variable into a conversation response and the patient will say the corresponding value:

| Type in the response | Will say |
| --- | --- |
|  | Current year |
| <name> = <fullname> | Patient full name, e.g. “Valerie Miller” |
|  | Patient first name, e.g. “Valerie” |
|  | Patient last name, e.g. “Miller” |
|  | Patient age from Patient Editor - Basics |
|  | “April 11” |
|  | “April 11” and year of birth based on the age field |
|  | Current date, e.g. “28th” |
|  | Current month, e.g. “March” |
|  | Current day, e.g. “Tuesday” |
|  | Current season, e.g. “Summer” |
|  | Current city location (if IP-based location is allowed) |
| <state> = <region> = <county> | Current state / county location (if IP-based location is allowed) |
|  | Current country location (if IP-based location is allowed) |
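Under the hood, this behaves like plain token substitution: wherever a variable appears in the authored response, the patient speaks the corresponding value instead. The Python sketch below is only an illustrative approximation, not the actual PCS Spark implementation; the `render_response` function and the patient dictionary are assumptions, and the location value is read from that dictionary for brevity where PCS would use IP-based location:

```python
# Illustrative sketch only -- not the actual PCS Spark implementation.
# It mimics how a dynamic variable typed into a response is replaced
# with a concrete value before the patient speaks.

def render_response(text: str, patient: dict) -> str:
    """Replace dynamic-variable tokens in an authored response."""
    substitutions = {
        # <name> and <fullname> produce the same output
        "<name>": patient["full_name"],
        "<fullname>": patient["full_name"],
        # <state>, <region> and <county> are also equivalent; PCS derives
        # this from IP-based location, here it comes from the dict
        "<state>": patient["state"],
        "<region>": patient["state"],
        "<county>": patient["state"],
    }
    for token, value in substitutions.items():
        text = text.replace(token, value)
    return text

patient = {"full_name": "Valerie Miller", "state": "Ohio"}
print(render_response("I'm <name>, and we're in <state>.", patient))
# prints: I'm Valerie Miller, and we're in Ohio.
```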

Emotions, Gestures and Verbal Affects

The avatar keeps eye contact while conversing, and occasionally looks around, folds its arms, and crosses its legs.

Add different emotions and gestures to conversation responses by typing any of these facial-expression or gesture tags:

  • <normal>, <scared>, <angry>, <surprised>, <worried>, <exhausted>, <happy>, <serious>, <sad>

  • <nod>, <headshake>

Program different verbal affects into conversation responses to make the patient laugh, cough or sneeze, for example, or pause while speaking:

  • <laugh>, <cough>, <sneeze>, <gasp>, <groan>, or <pause>
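Conceptually, these tags are cues embedded in the authored text: the avatar acts them out rather than speaking them. The sketch below is an illustrative approximation of that split, not the PCS Spark implementation; the `parse_response` function and its return shape are assumptions:

```python
import re

# Illustrative sketch only -- not the PCS Spark implementation. It shows
# how tags such as <happy> or <laugh> could be separated from the words
# the patient actually speaks.

EXPRESSIONS = {"normal", "scared", "angry", "surprised", "worried",
               "exhausted", "happy", "serious", "sad"}
GESTURES = {"nod", "headshake"}
AFFECTS = {"laugh", "cough", "sneeze", "gasp", "groan", "pause"}

TAG_RE = re.compile(r"<(\w+)>")

def parse_response(text: str):
    """Split an authored response into (spoken_text, cues)."""
    cues = [tag for tag in TAG_RE.findall(text)
            if tag in EXPRESSIONS | GESTURES | AFFECTS]
    spoken = TAG_RE.sub("", text).strip()
    spoken = re.sub(r"\s{2,}", " ", spoken)  # collapse doubled spaces
    return spoken, cues

spoken, cues = parse_response("<happy> I feel much better today <laugh>")
print(spoken)  # I feel much better today
print(cues)    # ['happy', 'laugh']
```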
