Thought-Based Reasoning Techniques
Guide to 8 LLM reasoning techniques: Chain-of-Thought, Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, and Reflexion with research citations
What is it?
Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.
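As an illustration (not part of the skill itself), the two most basic variants can be sketched as prompt constructors: zero-shot CoT appends the trigger phrase from Kojima et al. (2022), while few-shot CoT prepends worked examples whose answers spell out intermediate steps. The function names here are hypothetical:

```python
# Minimal sketch of CoT prompt construction; illustrative, not the skill's API.

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the canonical reasoning trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(question: str, exemplars: list[tuple[str, str]]) -> str:
    """Few-shot CoT: prepend (question, step-by-step answer) exemplars."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in exemplars)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = zero_shot_cot("If I have 3 apples and buy 2 more, how many do I have?")
print(prompt)
```

The resulting string is what gets sent to the model; the intermediate steps it then generates are the "explicit reasoning" the paragraph above describes.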
How to use it?
Install this skill in your Claude environment to add thought-based reasoning guidance. Once installed, Claude automatically applies the skill's guidelines when it detects a relevant task; you can also invoke the skill explicitly by referencing its name in a prompt.
The full source and documentation are available on GitHub.
Key Features
- Covers 8 LLM reasoning techniques, each with research citations: Chain-of-Thought, Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, and Reflexion
- Deep research and information synthesis
- Seamless integration with Claude's development workflow
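To make one of the listed techniques concrete: Self-Consistency samples several independent reasoning chains and takes a majority vote over their final answers. The sketch below assumes a `sample_answer` callable standing in for an LLM call; it is hypothetical, not part of this skill's interface:

```python
from collections import Counter

def self_consistency(sample_answer, question: str, n: int = 5) -> str:
    """Sample n reasoning chains and majority-vote their final answers.

    `sample_answer` is a hypothetical stand-in for a model call that
    returns the final answer extracted from one sampled chain-of-thought.
    """
    answers = [sample_answer(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Usage with a stub sampler whose chains occasionally disagree:
import itertools
stub = itertools.cycle(["42", "42", "41", "42", "42"]).__next__
print(self_consistency(lambda q: stub(), "What is 6 * 7?"))  # → 42
```

The vote over sampled chains is what makes the technique more robust than a single greedy decode: an occasional faulty chain (here, "41") is outvoted.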
Related Skills
prompt-engineering
Teaches well-known prompt engineering techniques and patterns, including Anthropic best practices and agent persuasion principles
Vectorize MCP Worker
Edge-native MCP server patterns for production RAG
Context Engineering
Context engineering techniques