# Master Thesis Research
This project was developed as part of a master's thesis on automated kernel tuning using Large Language Models (LLMs).
## Research Overview

The thesis explores the application of LLMs to GPU kernel optimization, investigating various tuning strategies and their effectiveness in different scenarios. The research provides a comprehensive analysis of:

- Different tuning strategies and their performance characteristics
- A comparison of various LLM models and their suitability for kernel optimization
- The impact of different parameter settings on tuning effectiveness
- Evaluation metrics and benchmarking methodologies
## Thesis Document

The complete thesis document is available in the project repository as `Master_Thesis.pdf` (university mirror). This document contains:

- Detailed experimental methodology
- A comprehensive comparison between different tuning strategies
- Performance analysis and benchmarking results
- Insights into the effectiveness of various LLM approaches for kernel optimization
- Recommendations for optimal settings and configurations
The thesis offers practical guidance for researchers and practitioners applying machine learning techniques to high-performance computing optimization problems.
## Experimental Results
The thesis includes extensive experimental validation of the framework, comparing approaches across a range of kernel types and optimization scenarios. These results inform the default settings and recommended practices documented throughout this framework.