LLM Kernel Tuner documentation

Basic usage

  • Getting started
  • Using different LLM models
    • Handling Structured Output
    • OpenAI
    • Anthropic
    • llama.cpp with Python Bindings
    • vLLM
  • Setting tuning strategy parameters
    • Autonomous Strategy examples
    • Explicit Strategy example
  • Tuning strategies
    • One Prompt Tuning Strategy
    • Autonomous Tuning Strategy
    • Explicit Tuning Strategy

Advanced usage

  • Custom tuning strategy
  • Testing strategies
    • Tests
    • Naive Testing Strategy
    • Custom Testing Strategy
  • Retry Policy
    • Overview
    • Basic Usage
    • Error Handling
    • Creating a Retry Policy
    • Direct Function Wrapping
    • Integrating with LangGraph Subgraphs
    • Testing Our Retry System
    • Complete Example
  • Structured Output
    • Why is this needed?
    • The get_structured_llm function and StructuredOutputType Enum
    • How it works
    • Example Usage
  • Performance Tracking
    • Return Values
    • PerformanceTracker Features
    • Performance Overview Display
    • Example Usage
    • PerformanceStep Details
    • Integration with Existing Code

Reference

  • API documentation
  • Prompts documentation
  • Master Thesis Research
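
As a quick orientation before the Getting started guide, the sketch below shows the general shape of an LLM-driven kernel tuning loop: ask a model for an optimized kernel variant, check correctness, benchmark, and keep the fastest candidate. Every name in it (tune_kernel, ask_llm_for_variant, and the stub helpers) is an illustrative assumption, not the actual llm_kernel_tuner API; consult the API documentation above for the real interface.

    import random

    CUDA_KERNEL = """\
    __global__ void vector_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }
    """

    def ask_llm_for_variant(source: str) -> str:
        # Stub (assumption): a real tuner would prompt an LLM backend
        # (OpenAI, Anthropic, llama.cpp, or vLLM) for an optimized variant.
        return source

    def passes_correctness_tests(source: str) -> bool:
        # Stub (assumption): compile the candidate and compare its output
        # against the reference kernel on test inputs.
        return True

    def benchmark(source: str) -> float:
        # Stub (assumption): time the compiled kernel on the target GPU.
        # Here we just return a placeholder value.
        return random.uniform(0.9, 1.1)

    def tune_kernel(kernel_source: str, rounds: int = 3) -> str:
        # Keep the fastest candidate that still passes the tests.
        best_source = kernel_source
        best_time = benchmark(kernel_source)
        for _ in range(rounds):
            candidate = ask_llm_for_variant(best_source)
            if not passes_correctness_tests(candidate):
                continue
            elapsed = benchmark(candidate)
            if elapsed < best_time:
                best_source, best_time = candidate, elapsed
        return best_source

    if __name__ == "__main__":
        print(tune_kernel(CUDA_KERNEL))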

© Copyright 2025, Nikita Zelenskis.
