Start New Evaluation
💡 Tip: Enter a URL or click on the input field to see popular model suggestions. Click "Auto-Analyze & Start Evaluation" to begin.
Evaluation Leaderboard
| Rank | Model | Evals | Score | Tier | T | R | U | E | Date |
|---|---|---|---|---|---|---|---|---|---|
Transparent • Reproducible • Understandable • Executable
✅ Evaluate the openness and reproducibility of open LLMs 📊
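For reference, here is a minimal sketch of how one leaderboard row might be represented as a data structure, assuming each column above maps directly to a field. The field names and types are illustrative, not the tool's actual schema:

```typescript
// Illustrative shape of a single leaderboard entry. Field names are
// assumptions derived from the leaderboard columns above, not the
// tool's real data model.
interface LeaderboardEntry {
  rank: number;
  model: string;          // model name or URL as entered by the user
  evals: number;          // number of evaluations recorded for this model
  score: number;          // total TRUE Framework score, 0-30
  tier: string;           // tier label derived from the total score
  transparent: number;    // T sub-score
  reproducible: number;   // R sub-score
  understandable: number; // U sub-score
  executable: number;     // E sub-score
  date: string;           // date of the most recent evaluation
}
```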
Models are classified into four tiers based on their total TRUE Framework score (maximum 30 points):

- 28-30 points
- 21-27 points
- 11-20 points
- 0-10 points
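As a rough sketch, this banding can be expressed as a simple lookup on the total score. The tier labels below are hypothetical placeholders, since the tool's own tier names are not listed here, and the score is assumed to be an integer between 0 and 30:

```typescript
// Maps a total TRUE Framework score (assumed integer, 0-30) to its tier band.
// Tier labels are placeholder names, not necessarily those the tool displays.
function classifyTier(totalScore: number): string {
  if (totalScore < 0 || totalScore > 30) {
    throw new RangeError("TRUE total score must be between 0 and 30");
  }
  if (totalScore >= 28) return "Tier 1 (28-30 points)";
  if (totalScore >= 21) return "Tier 2 (21-27 points)";
  if (totalScore >= 11) return "Tier 3 (11-20 points)";
  return "Tier 4 (0-10 points)";
}
```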
Accuracy Disclaimer: The TRUE Framework scores are based on publicly available information and may not reflect the complete picture of a model's openness. Scores generated by this tool should be considered preliminary assessments.
Verification Required: Please independently verify all evidence links and information before making decisions based on these evaluations. Model documentation and availability may change over time.
Use Case: This tool is intended for educational and research purposes to promote transparency in AI development. For critical decisions, conduct thorough independent research.