Conversations are used by story writers to write and organize conversations and dialog. They can also be used to create interactive character dialogue that allows a player to select from a list of replies. Conversations can be created purely through triggers, or created in data and run via triggers. Cinematic mode must be enabled using the Cinematic Mode action before conversation choices can be viewed.
- Exists to organize large amounts of dialog and speech between characters, such as the Transmissions between Jim Raynor and Kerrigan.
- Adding Lines will auto-generate placeholders for sound. Use the proper workflow to make the best use of this feature.
- Conversation States can be used to fork Conversations.
- Conversations are played using triggers.
- Each Conversation Line can be linked to a Character.
- Every time you add a line and define its Character, a Sound object is created. Its Id will be a combination of the character name and its index in the Conversation list. This speeds up the workflow when it comes to linking sounds to the text.
- The presenter's portrait can seemingly be defined in the Character settings. However, this is bugged, and it is recommended that the portrait be defined in the Transmission directly.
- Conversations manipulate portrait animations. The facial expression of the speaking Character is modified in the Animations section of each Line. Mouth movement is defined in .xfa files, which are imported together with the sound file. These files are hidden in the editor and are generated by external proprietary software. The rest of the portrait model is animated using Animations.
- Lip synchronization can be generated by adding a campaign sound file to the list of sounds in the auto-generated Sound object. These .ogg or .wav files have hidden .xfa files linked to them.
- There are portrait options in the Character settings, but they do not seem to work. It is recommended that the portrait to be displayed is defined using Transmission triggers and actions.
To create a Conversation:
1. Create instances of the Characters which will present the dialog.
2. Create a Conversation instance, add a Group, and fill it with Lines.
3. For each Line, choose a character in the Speaker list.
4. Create sound files in software such as Audacity.
5. Find the name of the auto-generated Sound instance created for each line; it is a combination of the character name and the line index.
6. Import all sound (.ogg) files. If you have access to .xfa files, you may import them too.
7. Move the imported sound files to the LocalizedData/Sound/VO folder.
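Step 5 above depends on predicting the auto-generated Sound instance name. The exact format the editor uses is not documented here, so the function below is a hypothetical Python illustration of the stated pattern (character name combined with the Line's index); the real Ids may pad or separate the index differently.

```python
def placeholder_sound_id(character: str, line_index: int) -> str:
    """Hypothetical illustration of the auto-generated Sound Id pattern:
    the character name combined with the line's index in the Conversation."""
    return f"{character}{line_index:03d}"

# List the expected Ids for a three-line exchange between two characters.
lines = [("Raynor", 0), ("Kerrigan", 1), ("Raynor", 2)]
for character, index in lines:
    print(placeholder_sound_id(character, index))
```

Matching your exported .ogg file names to these Ids before importing is what lets the editor link each clip to its Line automatically.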
If you play this conversation using Transmissions, each line will be displayed for the duration of the imported sound clip. If you want additional pauses, you may add Wait actions between the conversation lines.
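As a rough model of that timing rule, the sketch below adds up hypothetical clip lengths and Wait durations (the numbers are made up for illustration; nothing here is editor data). It only shows that each line holds for its clip's length, plus any Waits inserted between lines.

```python
# Each line is shown for the length of its imported sound clip;
# Wait actions add extra silence between lines.
clip_lengths = [3.2, 4.5, 2.8]   # hypothetical durations of imported .ogg clips (seconds)
waits = [0.5, 1.0]               # hypothetical Wait actions between the lines (seconds)

total = sum(clip_lengths) + sum(waits)
print(f"Conversation runs for about {total:.1f} seconds")
```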
- Groups of Conversation data
- All Basic items (lines, actions, etc.) have to belong to a group.
- A Line defines the text that displays in the subtitle when a Conversation is played. It also defines the facial expression, portrait model movement and mouth movement.
- Text: Defines the text content of the line to be displayed. Also defines the facial expression, which is how the face will look (angry, sad, etc.), but not mouth movement or head tilt.
- Sound: Defines the Sound instance which should be played during this line. If no sound clip has been imported, it is possible to define a specific length for the line. If the sound has a related .xfa file, it will determine the mouth movement.
- Animation: Defines which animation the portrait model should be playing. The default animation is Talk which twitches the head, making the eyes look at different targets.
- Camera: (no options)
- Cutscene: Used to link a cinematic segment to be played during the line.
- Condition: Checks Conversation State arrays. Has nothing to do with the Trigger Module's conditions.
- Action: Modifies Conversation State arrays when this line is being played.
- When you add a line, the editor will create a sound placeholder. If you overwrite the sound placeholder with your own file, the Transmission will play the sound and display the Line's text for the duration of your custom sound file.
- Line colors can be applied to the Lines by assigning a Character from the Speaker list. Lines are also colored based on sound: lines that have an imported sound clip (as described in the auto-generated Sound instance section) are colored green; otherwise they are colored red.
- Manipulates Conversation states.
- Is not related to actions in the trigger module.
- Checks the status of Conversation states, which are arrays of states. Conditions can be added to check conversation states before a Conversation State is modified.
- Is not related to conditions in the trigger module.
- Adds a pause in the conversation. When an entire conversation is played using Run Conversation action, this will cause a moment of silence. No Transmission portrait will be displayed during a wait.
- Add jumps to fork your conversations into trees. Conversation states are used for conditional jumping.
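The interaction between Conversation State arrays, Conditions, Actions, and Jumps described above can be modeled conceptually. The following is a plain Python sketch of that control flow, not actual editor data or Galaxy script; all state names and group names are invented for illustration.

```python
# Conversation states: named arrays of string states, checked by
# Conditions and modified by Actions as lines play.
states = {"MetKerrigan": ["false"], "Mission": ["briefing"]}

def check(state: str, index: int, expected: str) -> bool:
    """Condition: compare one entry of a Conversation State array."""
    return states[state][index] == expected

def set_state(state: str, index: int, value: str) -> None:
    """Action: overwrite one entry of a Conversation State array."""
    states[state][index] = value

def next_group() -> str:
    """Jump: pick the next Group based on a Condition,
    forking the Conversation into a tree."""
    if check("MetKerrigan", 0, "true"):
        return "ReunionBranch"
    return "IntroBranch"

print(next_group())                   # IntroBranch, since MetKerrigan is still "false"
set_state("MetKerrigan", 0, "true")   # an Action on some earlier Line
print(next_group())                   # ReunionBranch after the Action runs
```

The same pattern scales to any number of branches: each Jump tests one or more state entries set by Actions earlier in the Conversation.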
- If Cinematic mode has been enabled using triggers, this can be used to prompt the player with different options.
- Unknown if it works.
- A simple comment used to explain the flow of content.
- Does not interfere with the content.
- This is for self-organization only and has no impact on access by other parts of the game.