Version 3.0

Constitutional Research Framework

The governing principles that ensure all research is verifiable, honest, and defensible.

Why "Constitutional"?

Like a constitution for a nation, these principles are inviolable. They cannot be broken regardless of convenience, time pressure, or the desire to appear more complete.

This framework emerged from a failed PR submission where fabricated data and unverified claims damaged research credibility. These principles prevent that from happening again.

The Seven Articles

Article 1: Truth Over Convenience

NEVER fabricate, infer, or assume data.

If information cannot be verified, mark it as UNKNOWN.

"No data available" is always preferable to invented data.

Article 2: Source Everything

Every claim must have a traceable source.

No source = No claim.

Sources must be specific (URL, document name, timestamp), not vague ("various sources").

Article 3: Confidence Scoring

All data points receive a confidence score from 0.0 to 1.0:

1.0: Direct from official source, easily verifiable
0.8-0.9: Multiple independent sources agree
0.6-0.7: Single reliable source
0.4-0.5: Secondary source or aged data
0.0-0.3: Unverified or questionable
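One way to apply this banding consistently is a small helper that maps sourcing facts to the lower bound of each band. This is an illustrative sketch, not part of the framework itself; the function name, inputs, and exact thresholds are assumptions.

```python
def confidence_score(official: bool, independent_sources: int, aged: bool) -> float:
    """Map sourcing facts to a confidence band (illustrative thresholds).

    Returns the lower bound of the matching band from the scale above.
    """
    if official:
        return 1.0  # direct from official source, easily verifiable
    if independent_sources >= 2:
        return 0.8  # multiple independent sources agree
    if independent_sources == 1:
        return 0.4 if aged else 0.6  # single source; aged data drops a band
    return 0.0  # unverified or questionable
```

A scorer like this keeps bands consistent across researchers; the justification for each score (Gate 2) would still be recorded separately.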

Article 4: Temporal Awareness

All research is timestamped.

Data older than 90 days should be flagged for review.

Metrics (stars, TVL, users) require refresh dates.

Historical data is labeled as historical.
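The 90-day review rule can be sketched as a freshness check; the function and field names are illustrative, not prescribed by the framework.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # review window from the rule above

def needs_review(collected_on: date, today: date) -> bool:
    """Flag a data point whose collection date exceeds the 90-day window."""
    return today - collected_on > STALE_AFTER
```

Running such a check over every timestamped data point before publication would surface metrics that need a refresh date.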

Article 5: Primary Source Priority

Source hierarchy (highest to lowest reliability):

  1. Tier 1: Official website, GitHub, verified docs
  2. Tier 2: Official social media, press releases
  3. Tier 3: CoinGecko, DeFiLlama, aggregators
  4. Tier 4: News articles, third-party reviews
  5. Tier 5: Community wikis, forums
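The hierarchy above can be encoded as a lookup so that a claim's best available tier is computed rather than eyeballed. The source-type labels here are illustrative shorthand for the tier descriptions, not an official vocabulary.

```python
# Tier number per source type: lower is more reliable (illustrative labels).
SOURCE_TIERS = {
    "official_site": 1, "github": 1, "verified_docs": 1,
    "official_social": 2, "press_release": 2,
    "coingecko": 3, "defillama": 3,
    "news": 4, "review": 4,
    "wiki": 5, "forum": 5,
}

def best_tier(source_types):
    """Return the highest-reliability (lowest-numbered) tier among cited sources."""
    return min(SOURCE_TIERS[s] for s in source_types)
```

For example, a claim backed by both a forum post and the project's GitHub would be treated as Tier 1, since the strongest source governs.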

Article 6: Scope Boundaries

Research what exists, not what might exist.

Do not speculate about future features.

Do not assume unstated relationships.

Do not extrapolate from incomplete data.

Article 7: Honest Gap Reporting

Document what you COULD NOT find.

Missing data is valuable information.

"Team information not publicly disclosed" tells the reader something important.

Never hide gaps or pretend completeness.

Understanding "Research Data Quality"

Every report includes a Research Data Quality score. This rates our confidence in our own research, not an assessment of the project itself.

What it measures:

  • Source diversity and quality
  • Data freshness
  • Cross-verification success
  • Completeness of gap documentation

What it does NOT measure:

  • Project quality
  • Security posture
  • Investment worthiness
  • Technical merit
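As a sketch of how the four measured dimensions might combine into one 0.0-1.0 score: the equal weighting below is an assumption for illustration; the framework does not publish its weights.

```python
def data_quality(source_diversity: float, freshness: float,
                 cross_verified: float, gap_coverage: float) -> float:
    """Average the four measured dimensions into one 0.0-1.0 score.

    Equal weighting is an assumption; real weights are not published.
    """
    dims = (source_diversity, freshness, cross_verified, gap_coverage)
    assert all(0.0 <= d <= 1.0 for d in dims), "each dimension must be in [0, 1]"
    return sum(dims) / len(dims)
```

Whatever the weighting, the output rates the research process only, never the project under study.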

Quality Gates

Before any research is marked complete, it must pass these gates:

Gate 1: No Fabrication

  • Every data point has a specific source
  • No placeholder text remains
  • "UNKNOWN" used where data is unavailable

Gate 2: Confidence Verified

  • All scores are justified
  • No 1.0 without official source
  • Low-confidence items flagged

Gate 3: Temporal Integrity

  • Timestamps are accurate
  • Refresh dates noted for metrics
  • Historical vs current distinguished

Gate 4: Gap Honesty

  • Missing data documented
  • Gaps explain WHY data is missing
  • No false completeness
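The four gates lend themselves to an automated pre-publication check over each data point. This is a minimal sketch assuming data points are dictionaries; every field name here is illustrative, not a published schema.

```python
def passes_gates(point: dict) -> bool:
    """Check one data point against the four quality gates (illustrative fields)."""
    # Gate 1: a specific source, or an explicit "UNKNOWN" value
    no_fabrication = bool(point.get("source")) or point.get("value") == "UNKNOWN"
    # Gate 2: a 1.0 score is only allowed with a Tier 1 (official) source
    confidence_ok = (point.get("confidence", 0.0) < 1.0
                     or point.get("source_tier") == 1)
    # Gate 3: every point carries a collection timestamp
    temporal_ok = "collected_on" in point
    # Gate 4: unknowns must state why the data is missing
    gap_ok = point.get("value") != "UNKNOWN" or bool(point.get("gap_reason"))
    return all((no_fabrication, confidence_ok, temporal_ok, gap_ok))
```

Note that an honestly reported gap passes the gates, while a sourced-looking but unsourced claim does not; that asymmetry is the point of Article 1.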

OSINT Assessment Methodology

For projects receiving OPSEC/OSINT analysis, we use exclusively passive, non-invasive methods:

Data Sources Used

  • Shodan - Infrastructure scanning
  • crt.sh - Certificate transparency
  • DNS resolution - Subdomain mapping
  • HTTP headers - Security configuration
  • GitHub API - Repository analysis

NOT Performed

  • Active exploitation
  • Unauthorized access attempts
  • Social engineering
  • Penetration testing
  • Vulnerability exploitation

Version History

v3.0 (2026-01): Added OSINT methodology, clarified "Research Data Quality" scoring
v2.0 (2025-01): Added confidence scoring, gap reporting requirements
v1.0 (2024-12): Initial framework following PR disaster
