1. Oracle Cloud Infrastructure (OCI) Documentation: The "Best Practices for Prompts" page for the Generative AI service states, "Chain-of-thought (CoT) prompting enables complex reasoning capabilities through a series of intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before the response." (A sketch of such a few-shot CoT prompt follows this list.)
Source: Oracle Cloud Infrastructure Documentation, Generative AI, "Best Practices for Prompts," Section: "Chain-of-Thought Prompting."
2. Peer-Reviewed Academic Publication: The foundational paper on the topic defines the technique as a method of encouraging the model to generate a series of intermediate reasoning steps. The paper states, "...we explore chain-of-thought prompting as a simple and broadly applicable method for enhancing reasoning in language models... chain-of-thought prompting allows models to decompose multi-step problems into intermediate steps..." (The second sketch after this list contrasts a standard exemplar with a chain-of-thought exemplar.)
Source: Wei, J., Wang, X., Schuurmans, D., et al. (2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." Advances in Neural Information Processing Systems, 35, 24824-24837. (Available via NeurIPS Proceedings).
3. University Courseware: Stanford University's course on LLMs describes CoT as a method to improve performance on complex reasoning tasks by prompting the model to generate intermediate reasoning steps.
Source: Stanford University, CS324: Large Language Models, Winter 2024, Lecture 10: "Prompting, Instruction Tuning, and RLHF," Section on Chain-of-Thought.
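
For illustration, here is a minimal Python sketch of the few-shot chain-of-thought pattern the OCI guidance describes: a worked exemplar, adapted from the canonical example in Wei et al. (2022), is prepended to a new question so the model imitates the step-by-step format. The helper `build_cot_prompt` is a hypothetical name for this sketch, not part of any vendor SDK.

```python
# Hypothetical sketch of combining few-shot prompting with chain-of-thought:
# each exemplar pairs a question with worked intermediate steps before the
# final answer, so the model imitates that reasoning pattern on new questions.
# Exemplar text adapted from Wei et al. (2022); `build_cot_prompt` is
# illustrative only and not a real library or vendor API.

FEW_SHOT_COT_EXEMPLAR = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can
has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.
"""


def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar, then pose the new question for the model."""
    return f"{FEW_SHOT_COT_EXEMPLAR}\nQ: {question}\nA:"


if __name__ == "__main__":
    print(build_cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch and "
        "bought 6 more, how many apples do they have?"
    ))
```

The resulting string would be sent as the prompt to whatever model endpoint is in use; the exemplar's visible arithmetic is what elicits step-by-step reasoning in the model's completion.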
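
To make the decomposition point from Wei et al. (2022) concrete, a second sketch contrasts a standard exemplar target (final answer only) with a chain-of-thought target for the same question. The wording is adapted from the paper's Figure 1 example; the variable names are illustrative assumptions.

```python
# Hypothetical sketch contrasting a standard few-shot exemplar (answer only)
# with a chain-of-thought exemplar for the same question, mirroring the
# paper's point that CoT exposes the intermediate steps of a multi-step
# problem. Question and targets adapted from Wei et al. (2022), Figure 1.

QUESTION = (
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
)

# Standard exemplar target: the final answer with no intermediate steps.
STANDARD_TARGET = "The answer is 9."

# Chain-of-thought exemplar target: the multi-step problem is decomposed
# into intermediate steps (23 - 20 = 3, then 3 + 6 = 9) before the answer.
COT_TARGET = (
    "The cafeteria had 23 apples originally. They used 20 to make lunch, "
    "so they had 23 - 20 = 3. They bought 6 more apples, "
    "so they have 3 + 6 = 9. The answer is 9."
)

for label, target in (("standard", STANDARD_TARGET),
                      ("chain-of-thought", COT_TARGET)):
    print(f"[{label} exemplar]\nQ: {QUESTION}\nA: {target}\n")
```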