FAQ and Troubleshooting

Answers to the most common TalkScore Hub questions — missing reports, score concerns, alerts, exports, and when to contact your Talkpush representative.

Written by Miguel Olivares

This article answers common questions about TalkScore Hub, with step-by-step guidance for resolving the most frequent issues. If you don't find what you need here, your Talkpush representative is always the right next stop.


Q: Why didn't a candidate's report appear?

A: Reports appear in the Hub after the AI interview is completed and the scoring pipeline finishes processing. If a report is missing, check the following in order:

  1. Was the call completed? Go to Metrics → General → Call Status Distribution. If the call shows as "Not Taken," the candidate never connected — no report is generated for unanswered calls. If it shows as "Unfinished," the candidate connected but left early. Unfinished calls may or may not generate a report depending on how far the candidate progressed.

  2. Has enough time passed? Reports typically appear within a few minutes of the call ending. If the call just finished, wait 5–10 minutes and refresh the Reports page.

  3. Are your filters hiding it? Check the filters on the Reports page:

    • Is the time period set to include the date of the call?

    • Is the assessment filter set to the correct AI agent?

    • Is the status filter excluding incomplete calls?

    • Is the Test Calls toggle off? If it was a test call, turn this toggle on to see it.

  4. Is it a scoring pipeline issue? Go to Insights → Quality and filter by "Missing Score Completed." If you see a flag for this candidate, the interview completed but scoring failed. Contact your Talkpush representative — they can investigate and reprocess the report.

If none of the above explains the missing report, contact your Talkpush representative with the candidate's name, the date and approximate time of the call, and the assessment name.


Q: A candidate's score looks wrong — what should I check?

A: Before concluding a score is incorrect, review the full candidate report:

  1. Open the candidate report in Reports and read the per-dimension scores and AI reasoning. The AI explains why it assigned each score with direct references to the transcript. Does the reasoning make sense given what the candidate said?

  2. Read the transcript. The transcript is the ground truth. Compare what the candidate actually said against the AI's reasoning. Sometimes a score that looks wrong at first glance is justified by specific transcript evidence.

  3. Listen to the recording. Audio can reveal tone, hesitation, or confidence that the transcript alone doesn't capture. This is especially useful for language proficiency scores.

  4. Check call duration and completion status. A very short call (under 60 seconds) may not have enough content for reliable scoring. If the call was incomplete, the score may be based on limited evidence.

  5. Check for quality flags. Look at the Agent Quality Analysis section at the bottom of the report. If the agent hallucinated job details or had other quality issues during this specific call, the candidate may have responded based on wrong information — which would affect their score.

  6. Submit feedback. If after reviewing you still believe the score is wrong, use the Feedback feature to submit a Score Opinion. Note which specific dimension you think is overscored or underscored, and why. This feedback helps improve scoring accuracy over time.

  7. Contact your Talkpush representative if you see a pattern of incorrect scores across multiple candidates. Include the affected candidate names, the dimensions you believe are mis-scored, and your reasoning.


Q: What does "score compression" mean?

A: Score compression occurs when most candidates receive very similar scores — typically clustering at the top of the scale (e.g., 80%+ of candidates scoring 4 or above). When this happens, the scoring rubric has lost its ability to meaningfully differentiate between candidates.

Score compression is a calibration issue, not a candidate quality issue. It does not mean all your candidates are performing at the same level — it means the scoring criteria are too lenient, making it easy for most candidates to qualify for high scores.

How to identify score compression

  • Go to Score Calibration and check the Score Distribution chart. If most scores cluster at 4–5, compression is likely present.

  • Check the Soft Skill Averages section. Any dimension with an average above 4.0 and very low standard deviation is probably compressed.

  • Go to Metrics → Outcome Analysis and compare the TalkScore Distribution for hired vs. not-hired candidates. If both groups have similar score distributions, the score isn't helping differentiate.
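The identification checks above can be sketched numerically. This is a minimal illustration only, assuming scores on the 1–5 scale (for example, an overall-score column from a Reports CSV export); the function name, thresholds, and sample data are hypothetical and not part of the Hub.

```python
# Minimal sketch of the compression checks above, on a plain list of
# 1-5 scores. All names and thresholds here are illustrative assumptions.
from collections import Counter
from statistics import mean, pstdev

def compression_signals(scores, high=4, share_threshold=0.8, std_threshold=0.5):
    """Heuristics mirroring the checks above: share of scores at or
    above `high`, the average, and the spread (standard deviation)."""
    high_share = sum(s >= high for s in scores) / len(scores)
    return {
        "distribution": dict(sorted(Counter(scores).items())),
        "high_share": high_share,
        "mean": round(mean(scores), 2),
        "std": round(pstdev(scores), 2),
        # Flag when most scores cluster at the top, or the average is
        # high with very little spread.
        "compressed": high_share >= share_threshold
        or (mean(scores) > 4.0 and pstdev(scores) < std_threshold),
    }

# Made-up scores clustered at the top of the scale:
sample = [5, 5, 4, 5, 4, 4, 5, 4, 3, 5]
print(compression_signals(sample))
```

With this sample, 90% of scores sit at 4 or above, so the sketch flags compression; on a healthy distribution the high-score share drops and the standard deviation widens.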

What to do

Contact your Talkpush representative. They will review and tighten the rubric criteria so that high scores require more specific, detailed evidence from candidates. You do not need to make any changes yourself — rubric adjustments are handled by the Talkpush team.


Q: What should I do when I see a Critical alert?

A: Critical alerts require immediate attention. Here's the step-by-step response:

  1. Read the alert details. Go to Insights → Health (or check the Alerts tab) and note:

    • The alert type (e.g., bias flag spike, hallucination spike)

    • The affected agent(s)

    • The time period

    • The number of affected conversations

  2. Review the flagged reports. Go to Insights → Quality and filter by the relevant issue type and time period. Read the explanations for each flag to understand the pattern.

  3. Contact your Talkpush representative immediately. Include:

    • The alert type and severity

    • The affected agent name(s)

    • The time period

    • The pattern you observed (e.g., "all hallucination flags are reading placeholder values" or "bias flags are related to age")

  4. Do not dismiss the alert without investigation. Even if a Critical alert looks like a false positive, it needs professional review.

  5. Do not attempt to fix the issue yourself by changing agent configuration. Critical alerts often require coordinated changes to system prompts, rubrics, or data sources that should be handled by the Talkpush team.

  6. Do not tell candidates about quality flags or alerts.

  7. Monitor after the fix. Once your Talkpush representative confirms changes have been made, check the Quality log over the following days to verify the flags stop appearing.

Special note on bias alerts

Bias flags are always Critical severity. The nine types of bias monitored are: gender, racial, age, socioeconomic, disability, cultural/nationality, language proficiency, accent/dialect, and interview format bias. Even a single bias flag warrants investigation — do not assume it is a false positive.


Q: Can we change the questions in an assessment?

A: Assessment questions, rubrics, and agent configurations are not editable through TalkScore Hub. The Hub is a monitoring and analytics platform — it shows you how your assessments are performing, but configuration changes are made separately.

To request changes to assessment questions, scoring rubrics, agent behavior, or any other configuration:

  1. Document what you want to change and why. The more specific you are, the faster the change can be made. For example: "Question 3 about scheduling flexibility has a 15% dropout rate — we'd like to rephrase it" is more actionable than "we want to change some questions."

  2. Contact your Talkpush representative with your request. Include:

    • The assessment name

    • The specific change you want (new questions, reworded questions, different rubric criteria, etc.)

    • Any data from the Hub that supports the change (dropout rates, score compression, quality flags)

  3. The Talkpush team will implement and test the changes. After the update goes live, monitor the Hub to confirm the change had the intended effect (e.g., reduced dropout, better score differentiation).


Q: Why does the same candidate have two reports?

A: A candidate can have multiple reports if they went through more than one AI interview. Common reasons:

  • Rescheduled interview. The candidate was rescheduled and completed a second call. Both the original (incomplete) and the new (completed) interview generate separate reports.

  • Multiple assessments. The candidate applied to more than one role or campaign, each with its own AI interview.

  • Retry after technical failure. If a call had technical problems and the candidate was given another attempt, both calls appear as separate reports.

  • Test calls. If someone ran test calls using the candidate's information, those may appear alongside the real interview. Use the Test Calls toggle on the Reports page to show or hide test data.

To identify which report is the definitive one, check the date, duration, and completion status of each report. The most recent completed, full-length call is typically the one to use for hiring decisions.

If you believe duplicate reports are appearing incorrectly (e.g., the same single call generated two reports), contact your Talkpush representative with the candidate name and the dates of both reports.


Q: How do I export data from TalkScore Hub?

A: TalkScore Hub offers two ways to export data:

Export candidate reports

Go to Reports, apply your desired filters (time period, assessment, campaign, status), and click the Export CSV button. This downloads a spreadsheet with all candidates matching your filters, including their scores, CEFR levels, dates, and completion status.

Export a Snapshot summary

Go to the Snapshot dashboard and click Generate Report in the top-right corner. This creates a downloadable summary of your dashboard data — KPI cards, agent health status, and key trends — formatted for sharing with stakeholders who don't have Hub access.

What's included in each export

  • Reports CSV: candidate name, email, overall score, per-dimension scores, CEFR level, date, duration, completion status, assessment name, campaign

  • Snapshot Report: KPI summary (calls started, completion rate, average score, standard deviation), agent health table, period-over-period comparisons

Tips

  • Apply filters before exporting to get only the data you need.

  • For large date ranges, the CSV export may take a moment to generate.

  • The Snapshot report reflects whatever time filter is currently active on the dashboard.
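If you analyze an exported Reports CSV outside the Hub, the Python standard library is enough for quick checks. A minimal sketch, assuming hypothetical header names ("completion_status", "overall_score"); match them to the columns in your actual export.

```python
# Sketch: quick checks on a Reports CSV export using only the Python
# standard library. The header names below are assumptions; confirm
# them against the first line of your actual export file.
import csv
import io

# Stand-in for open("reports_export.csv", newline=""):
sample_export = io.StringIO(
    "candidate,completion_status,overall_score\n"
    "Ana,Completed,4\n"
    "Ben,Unfinished,2\n"
    "Cara,Completed,5\n"
)

rows = list(csv.DictReader(sample_export))
completed = [r for r in rows if r["completion_status"] == "Completed"]
avg_score = sum(int(r["overall_score"]) for r in completed) / len(completed)
print(f"{len(completed)} of {len(rows)} completed, average score {avg_score}")
```

Restricting the average to completed calls mirrors the guidance above: incomplete calls may carry scores based on limited evidence, so they can skew a naive average.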


Q: Who should I contact for help?

A: For any issues, questions, or change requests related to TalkScore Hub, contact your Talkpush representative. They are your single point of contact and will route your request to the appropriate team.

When to reach out

For each situation below, include the listed details in your message:

  • Critical alert triggered: alert type, affected agent, time period, pattern observed

  • Score compression detected: score distribution data, affected assessment, time period

  • Hallucination flags appearing: number of flags, affected agent, pattern from the Quality log

  • Score looks incorrect: candidate name, which dimension seems wrong, your reasoning

  • Change to assessment questions or rubric: assessment name, specific change requested, supporting Hub data

  • Missing reports: candidate name, expected call date/time, assessment name

  • Completion rate dropping: assessment name, time period, dropout stage data

  • General questions about the Hub: description of what you're trying to do or understand

Best practices for support requests

  • Be specific. Include assessment names, date ranges, and candidate names when relevant.

  • Include Hub data. Screenshots or data from the relevant Hub screens help the team diagnose issues faster.

  • Note the urgency. Critical alerts and bias flags should be flagged as urgent. Routine questions and feature requests can follow normal timelines.

  • Don't attempt configuration changes yourself. Rubric adjustments, system prompt changes, and agent configuration modifications are handled by the Talkpush team to ensure quality and consistency.

