evaluating-code-models | skill guide | OpenClaw Study

Evaluates code generation models across HumanEval, MBPP, MultiPL-E, and 15+ benchmarks with pass@k metrics. Use when benchmarking code models, comparing codi…
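
The headline metric here, pass@k, is the probability that at least one of k sampled completions for a problem passes its unit tests. It is usually computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021). Below is a minimal sketch in Python, assuming n samples per problem of which c pass; the function name and example numbers are illustrative, not this skill's actual API:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    1 - C(n - c, k) / C(n, k), evaluated as a numerically stable product.

    n: total samples generated per problem
    c: samples that passed the unit tests
    k: sample budget being scored
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k draw contains a pass.
        return 1.0
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i  # telescoping form of C(n-c, k) / C(n, k)
    return 1.0 - prob_all_fail

# Illustrative numbers: 200 samples per problem, 37 passing.
print(pass_at_k(n=200, c=37, k=1))   # 0.185 (= c / n)
print(pass_at_k(n=200, c=37, k=10))  # ≈ 0.88
```

The telescoping product mirrors the reference implementation from the HumanEval paper and avoids large intermediate binomial coefficients; per-problem scores are then averaged across the benchmark to give the reported pass@k.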

