Not all responses are created equal. Deepfield helps you assess response quality so you can focus on high-quality data for your analysis.

Why Quality Matters

High-quality responses:
  • Provide accurate, thoughtful answers
  • Contribute meaningful insights
  • Make analysis more reliable
Low-quality responses:
  • May contain random or rushed answers
  • Can skew your results
  • Add noise to your data

Quality Scoring

Deepfield automatically assesses response quality based on several factors.

What’s Evaluated

Each factor, and what it measures:
  • Completion time: Was the study completed in a reasonable time?
  • Response length: Are open-ended answers sufficiently detailed?
  • Consistency: Do answers make logical sense together?
  • Engagement: Do responses show thoughtful engagement?
  • Audio/Video quality: Is media clear and audible?

Quality Indicators

Responses may be flagged for:
  • Speeding: Completed unusually fast
  • Straightlining: Same answer for all matrix questions
  • Gibberish: Nonsensical open-ended responses
  • Poor media: Inaudible or unclear recordings

Viewing Quality Scores

In the Response Table

The response table shows a quality indicator for each response (an illustrative score-to-level mapping is sketched after this list):
  • High quality: Meets all quality standards
  • Medium quality: Some concerns but usable
  • Low quality: Significant quality issues
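
For intuition, here is a minimal Python sketch of how a score could map to these three levels, assuming a normalized score between 0 and 1. The cutoffs are illustrative assumptions; Deepfield does not publish its internal thresholds.

```python
# Illustrative only: maps a hypothetical normalized quality score to the
# three levels shown in the response table. Thresholds are assumptions.
def quality_level(score: float) -> str:
    if score >= 0.8:
        return "High quality"    # meets all quality standards
    if score >= 0.5:
        return "Medium quality"  # some concerns but usable
    return "Low quality"         # significant quality issues

print(quality_level(0.92))  # High quality
print(quality_level(0.35))  # Low quality
```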

Individual Response Details

Click on any response to see:
  • Overall quality score
  • Specific quality flags
  • Details about any issues detected

Quality Factors Explained

Completion Time

Too fast: Participant may have rushed through without reading
  • Typical flag: Completed in less than 1/3 of the average time (see the sketch below)
Too slow: Participant may have been distracted
  • Less concerning than speeding
  • May indicate thoughtful responses
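
To make the 1/3 rule concrete, here is a minimal Python sketch of a speeding check, assuming completion times (in seconds) exported per response. The function name and data shape are assumptions for illustration, not Deepfield's actual flagging logic.

```python
# Illustrative speeding check: flags responses completed in under 1/3 of
# the average completion time. Names and data are assumptions.
from statistics import mean

def flag_speeders(times_by_response: dict[str, float]) -> set[str]:
    """Return IDs of responses completed in under 1/3 of the average time."""
    average = mean(times_by_response.values())
    return {rid for rid, secs in times_by_response.items() if secs < average / 3}

times = {"r1": 610.0, "r2": 95.0, "r3": 540.0, "r4": 575.0}
print(flag_speeders(times))  # {'r2'}: 95s is well under 1/3 of the ~455s average
```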

Open-Ended Response Length

Too short: Brief answers that don’t provide insight
  • Example: “good” or “idk”
  • Misses the value of qualitative questions (a simple check is sketched after this list)
Ideal: Responses that answer the question with some detail
  • Complete sentences
  • Specific examples or explanations
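
As one possible implementation of a “too short” check, the Python sketch below uses a word-count threshold plus a small list of known low-effort answers. Both the threshold and the list are assumptions for illustration, not Deepfield's rules.

```python
# Illustrative "too short" check for open-ended answers. The five-word
# threshold and the low-effort list are assumptions.
LOW_EFFORT = {"good", "idk", "n/a", "fine", "nothing"}

def is_too_short(answer: str, min_words: int = 5) -> bool:
    words = answer.strip().lower().split()
    return len(words) < min_words or " ".join(words) in LOW_EFFORT

print(is_too_short("idk"))                                         # True
print(is_too_short("The checkout flow confused me at step two."))  # False
```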

Straightlining

When participants select the same option for every row in a matrix:
  • May indicate disengagement
  • Could be valid if all items truly rate the same
  • Review in context (see the sketch below)
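
A straightlining check can be as simple as testing whether every row of a matrix question received the same answer. The sketch below assumes answers are exported as one selected option per row; a flagged response still needs human review, since identical ratings can be legitimate.

```python
# Illustrative straightlining check for a single matrix question.
# Assumes one selected option per matrix row; the data shape is an assumption.
def is_straightlined(matrix_answers: list[str]) -> bool:
    """True when a multi-row matrix received the identical answer on every row."""
    return len(matrix_answers) > 1 and len(set(matrix_answers)) == 1

print(is_straightlined(["Agree", "Agree", "Agree", "Agree"]))       # True
print(is_straightlined(["Agree", "Neutral", "Agree", "Disagree"]))  # False
```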

Consistency

AI checks whether answers make logical sense:
  • Does Q5 answer align with Q3?
  • Are there contradictions?
  • Do responses tell a coherent story?

Media Quality

For video and audio responses:
  • Is audio clear and understandable?
  • Is video properly recorded?
  • Can the response be transcribed?

Managing Quality Issues

Review Low-Quality Responses

Before including in analysis:
  1. Filter to show low-quality responses
  2. Review each one individually
  3. Decide whether to include or exclude

Exclude Problematic Responses

Options for handling poor quality:
  • Exclude from analysis: Don’t include in reports
  • Flag for manual review: Review before deciding
  • Include with caution: Use but note the limitation

Replace Low-Quality Responses

If quality issues are significant:
  • Consider recruiting additional participants
  • Replace unusable responses with new ones
  • Update your quality criteria for future studies

Improving Response Quality

At the Study Design Stage

Keep the length reasonable. Long studies lead to fatigue and lower-quality responses.
Write clear questions. Confusing questions get confusing answers.
Mix question types. Variety keeps participants engaged.

At the Recruitment Stage

Target the right audience. Engaged, relevant participants give better responses.
Set expectations. Let participants know what’s involved.

At the Collection Stage

Monitor early. Check the first responses for quality issues.
Act quickly. Address problems before collecting many bad responses.

Quality in Analysis

Filtering for Analysis

When generating reports, you can (see the sketch after this list):
  • Include only high-quality responses
  • Set quality thresholds
  • Exclude flagged responses
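
The sketch below shows one way such filtering could look in code, assuming each exported response record carries the quality level from the response table. The record shape and threshold ordering are assumptions for illustration.

```python
# Illustrative quality filter: keep Medium-or-better responses before
# analysis. The record shape and the ranking are assumptions.
RANK = {"Low": 0, "Medium": 1, "High": 2}

responses = [
    {"id": "r1", "quality": "High"},
    {"id": "r2", "quality": "Low"},
    {"id": "r3", "quality": "Medium"},
]

keep = [r for r in responses if RANK[r["quality"]] >= RANK["Medium"]]
print([r["id"] for r in keep])  # ['r1', 'r3']
```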

Reporting on Quality

Your analysis may note (see the summary sketch below):
  • Total responses collected
  • Responses meeting quality standards
  • Any exclusions and reasons
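
A quality summary for a report can be produced with a simple tally, as in the sketch below. The quality labels are assumed to come from an export, and the values are made up for illustration.

```python
# Illustrative quality tally for reporting: total collected, responses
# meeting standards (High or Medium), and exclusions. Data is made up.
from collections import Counter

levels = ["High", "High", "Medium", "Low", "High", "Medium"]
counts = Counter(levels)
total = len(levels)
meeting = counts["High"] + counts["Medium"]
print(f"Collected: {total}; meeting standards: {meeting}; excluded (Low): {counts['Low']}")
```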

Common Quality Questions

How many responses typically meet quality standards?
Generally 80-95%. A rate below 70% may indicate problems with the study itself.

Should I exclude all low-quality responses?
Review them first. Some may still contain valuable insights; others should definitely be excluded.

What causes low quality rates?
Possible causes: the study is too long, questions are confusing, the audience is wrong, or incentives are poorly aligned.

Can I fix quality issues after launch?
Options are limited once a study has launched. You can recruit more participants or adjust criteria for new responses.

Quality Checklist

Before analysis, verify:
  • Reviewed overall quality distribution
  • Checked low-quality responses individually
  • Decided on inclusion/exclusion criteria
  • Documented any quality-related decisions
  • Have sufficient high-quality responses for analysis
