
Advanced Settings

Fine-tune Intervu's behavior with advanced configuration options.

Overview

Advanced settings affect performance, accuracy, and generation behavior. Most users can leave these at defaults.


Audio Processing

Silence RMS Threshold

Audio below this level is treated as silence and not sent to STT.

  • Default: 0.005
  • Range: 0.001 - 0.1
  • Effect: Lower = more audio sent, Higher = more filtering

When to adjust:

  • Increase if you hear static/noise being transcribed
  • Decrease if quiet speech is being ignored
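To make the threshold concrete, silence gating can be sketched as an RMS check over each audio chunk. This is a minimal illustration, not Intervu's actual implementation; the function name `is_silence` and the normalized float sample format are assumptions.

```python
import math

SILENCE_RMS_THRESHOLD = 0.005  # Intervu's default

def is_silence(samples, threshold=SILENCE_RMS_THRESHOLD):
    """Return True if a chunk's RMS level falls below the threshold.

    `samples` are float audio samples normalized to [-1.0, 1.0].
    Chunks flagged as silence are never sent to the STT endpoint.
    """
    if not samples:
        return True
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms < threshold
```

Raising the threshold makes more low-level audio (fan noise, static) fail this check; lowering it lets quieter speech through.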

Audio Chunk Duration

Length of audio chunks sent to STT endpoint.

  • Default: 3 seconds
  • Range: 1 - 10 seconds
  • Effect:
    • Lower = faster initial response, more API calls
    • Higher = fewer API calls, slower response

Recommendation: Keep at default for real-time feel.
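The trade-off above comes from how a continuous stream is cut into fixed-length pieces. A rough sketch, assuming a 16 kHz sample rate (the rate is an assumption; Intervu's internal chunking may differ):

```python
SAMPLE_RATE = 16_000   # assumed STT sample rate
CHUNK_SECONDS = 3      # Intervu's default chunk duration

def split_into_chunks(samples, seconds=CHUNK_SECONDS, rate=SAMPLE_RATE):
    """Split an audio sample stream into fixed-duration chunks.

    Each chunk becomes one STT request, so shorter chunks mean
    faster first results but proportionally more API calls.
    """
    size = seconds * rate
    return [samples[i:i + size] for i in range(0, len(samples), size)]
```

Seven seconds of audio at 3-second chunks yields three requests (3 s + 3 s + 1 s); at 1-second chunks it would yield seven.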


Transcript Processing

Hallucination Phrases

Whisper sometimes "hears" phrases in silence. These are filtered out.

  • Default: you,thank you,thanks,thanks for watching,thank you for watching,thanks for listening,thank you for listening,bye,goodbye,the end,so,okay,hmm,uh,um,oh,ah,i
  • Format: Comma-separated list

Adding custom phrases:

you,thank you,thanks,[your custom phrases here]

When to modify:

  • If you see recurring false transcriptions
  • If certain phrases appear when no one is speaking
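Conceptually, the comma-separated list becomes a set of phrases that are dropped when an entire transcript matches one of them. A minimal sketch (the `build_filter` helper and the trailing-punctuation handling are assumptions for illustration):

```python
def build_filter(phrases_csv):
    """Build a predicate from a comma-separated hallucination list."""
    phrases = {p.strip().lower() for p in phrases_csv.split(",") if p.strip()}

    def is_hallucination(transcript):
        # Drop a transcript only when the whole utterance matches a phrase,
        # ignoring case and trailing punctuation Whisper often appends.
        return transcript.strip().lower().strip(".!?") in phrases

    return is_hallucination
```

With this matching rule, "Thank you." is filtered but "Thank you for the question, let me explain" is not, since only exact whole-utterance matches are dropped.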

LLM Generation

Max Tokens

Maximum tokens in generated answers.

  • Default: 0 (unlimited)
  • Range: 0 - 8192 (model dependent)
  • Effect:
    • Lower = shorter, more concise answers
    • Higher = longer, more detailed answers

Recommendation: Keep at default. Use system prompt to control length.

LLM Debounce Seconds

Delay after interviewer speaks before generating an answer.

  • Default: 2 seconds
  • Range: 0 - 10 seconds
  • Effect:
    • Lower = faster response, may interrupt ongoing speech
    • Higher = slower, but more complete questions

Recommendation: Increase if answers are generated before questions are complete.
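The debounce behavior can be sketched with a restartable timer: every new piece of speech cancels the pending countdown, so generation only fires after the interviewer has been quiet for the full delay. This is an illustrative sketch, not Intervu's code; the `Debouncer` class is an assumption.

```python
import threading

DEBOUNCE_SECONDS = 2  # Intervu's default

class Debouncer:
    """Restart a countdown on every transcript; fire only after silence."""

    def __init__(self, delay, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def speech_received(self, text):
        if self._timer is not None:
            self._timer.cancel()  # interviewer is still talking; reset
        self._timer = threading.Timer(self.delay, self.callback, args=(text,))
        self._timer.start()
```

A longer delay means partial questions keep resetting the timer, so only the final, complete question reaches the LLM.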

LLM Thinking Mode

For models that support extended reasoning (Claude, some local models).

  • Default: Off
  • Effect: Model may emit thinking tokens before the answer
  • Use case: Complex questions requiring reasoning

Queue Mode

Controls how multiple questions are handled.

  • Off (Priority): Cancels the previous answer and generates for the latest question
  • On (Queue): Queues all questions and answers them in the order received
  • Default: Off (Priority mode)
  • Use Queue mode when: You want answers to all questions in order
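The difference between the two modes can be sketched as follows (the `AnswerScheduler` class is a hypothetical illustration of the behavior described above, not Intervu's implementation):

```python
from collections import deque

class AnswerScheduler:
    """Priority mode keeps only the latest question; Queue mode keeps all."""

    def __init__(self, queue_mode=False):
        self.queue_mode = queue_mode
        self.pending = deque()

    def add_question(self, question):
        if not self.queue_mode:
            self.pending.clear()  # Priority: drop anything still pending
        self.pending.append(question)

    def next_question(self):
        return self.pending.popleft() if self.pending else None
```

In Priority mode a rapid-fire second question replaces the first; in Queue mode both are answered, first-in first-out.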

Show Queued Answers

When Queue mode is on, show cards for queued questions.

  • Default: Off
  • Effect: Shows placeholder cards for questions waiting for answers
  • Use case: Visual feedback that questions are being processed

Advanced Mode

Dual-LLM pipeline for intelligent question extraction.

What It Does

Normal mode generates answers whenever the interviewer speaks. Advanced mode:

  1. Captures interviewer speech
  2. Uses a second LLM to extract complete questions
  3. Only generates answers for actual questions

Configuration

  • Advanced Mode: Enable/disable the dual-LLM pipeline
  • Extractor Endpoint: LLM endpoint for question extraction (can be the same as the main LLM)
  • Extractor Model: Model used for extraction
  • Extractor API Key: API key, if the extractor endpoint requires one

When to Use

  • Enable when: Interviewer talks a lot between questions
  • Disable when: You want answers for every statement

Contextual Answer Tips

Enable structured answer output with key points and confidence.

When Enabled

LLM outputs:

KEY POINTS:
- First key point
- Second key point
- Third key point

CONFIDENCE: 85

ANSWER:
[Full answer text]

This is parsed and displayed as:

  • Bullet points above the answer
  • Confidence meter

  • Default: Off
  • Use case: When you want quick takeaways before reading the full answer
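Parsing the structured output above amounts to splitting the text into its three labeled sections. A minimal sketch under the assumption that the LLM follows the format exactly (the function name and return shape are illustrative):

```python
def parse_structured_answer(raw):
    """Split structured LLM output into (key_points, confidence, answer)."""
    key_points, confidence, answer_lines = [], None, []
    section = None
    for line in raw.splitlines():
        stripped = line.strip()
        if stripped == "KEY POINTS:":
            section = "points"
        elif stripped.startswith("CONFIDENCE:"):
            confidence = int(stripped.split(":", 1)[1])
            section = None
        elif stripped == "ANSWER:":
            section = "answer"
        elif section == "points" and stripped.startswith("- "):
            key_points.append(stripped[2:])
        elif section == "answer":
            answer_lines.append(line)
    return key_points, confidence, "\n".join(answer_lines).strip()
```

A real parser would also need a fallback for responses where the model ignores the template and emits a plain answer.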


FFmpeg Path

Custom FFmpeg binary location.

  • Default (Windows): Bundled FFmpeg (automatically downloaded)
  • Default (macOS): System FFmpeg (must be installed via Homebrew)

When to Change

  • If bundled FFmpeg doesn't work on Windows
  • If you have a custom FFmpeg build
  • If Intervu can't find FFmpeg on macOS

Example Paths

Windows: C:\ffmpeg\bin\ffmpeg.exe
macOS (Intel): /usr/local/bin/ffmpeg
macOS (Apple Silicon): /opt/homebrew/bin/ffmpeg
Linux: /usr/bin/ffmpeg
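Resolution logic like the following is one way an app can honor a custom path while falling back to the system binary. This is a sketch of the general pattern, not Intervu's actual lookup order (`resolve_ffmpeg` is a hypothetical name):

```python
import shutil
from pathlib import Path

def resolve_ffmpeg(custom_path=None):
    """Return a usable FFmpeg binary: custom setting first, then PATH."""
    if custom_path and Path(custom_path).is_file():
        return custom_path
    found = shutil.which("ffmpeg")  # e.g. /opt/homebrew/bin/ffmpeg
    if found:
        return found
    raise FileNotFoundError("FFmpeg not found; set a custom FFmpeg Path")
```

If Intervu reports that FFmpeg is missing, checking `which ffmpeg` (macOS/Linux) or `where ffmpeg` (Windows) tells you whether a PATH lookup like this can succeed.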

macOS Installation

FFmpeg is not bundled on macOS. Install it manually:

```bash
brew install ffmpeg
```

See macOS Setup for more details.


Settings Backup

Export Settings

Your settings are stored at:

Windows:

%APPDATA%/intervu/settings.json

macOS:

~/Library/Application Support/intervu/settings.json

You can back up this file to preserve your configuration.
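A small script can locate and copy the file using the paths documented above. This sketch covers only the Windows and macOS locations given here; the helper names and backup filename are assumptions.

```python
import os
import shutil
import sys
from pathlib import Path

def settings_path():
    """Platform-specific location of Intervu's settings.json."""
    if sys.platform == "win32":
        return Path(os.environ["APPDATA"]) / "intervu" / "settings.json"
    # macOS location per the documentation above
    return Path.home() / "Library" / "Application Support" / "intervu" / "settings.json"

def backup_settings(dest_dir):
    """Copy settings.json into dest_dir, preserving timestamps."""
    dest = Path(dest_dir) / "settings.backup.json"
    shutil.copy2(settings_path(), dest)
    return dest
```

Restoring is the reverse copy while Intervu is closed, so the app reads the restored file on next launch.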

Clear Ratings

In Advanced settings, click Clear Ratings to remove all rated answers (used for in-context learning).


Troubleshooting Advanced Settings

Too Much Noise Being Transcribed

  1. Increase Silence RMS Threshold
  2. Add noise patterns to Hallucination Phrases

Answers Generated Before Question is Complete

  1. Increase LLM Debounce Seconds
  2. Enable Advanced Mode for question extraction

Answers Too Long/Short

  1. Adjust Max Tokens
  2. Modify System Prompt to specify length requirements

Slow Responses

  1. Reduce Audio Chunk Duration
  2. Use a faster LLM model
  3. Check system resources


Made with ❤️ by Aldrick Bonaobra