Deep dive into adaptive prediction intervals, exploring methodologies and benchmarking techniques.
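To give a flavor of the methodology, below is a minimal sketch of split conformal prediction, a standard baseline for building prediction intervals with a coverage guarantee; the synthetic data and the random-forest model are illustrative assumptions, not taken from the talk itself.

```python
# Minimal sketch of split conformal prediction intervals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=2000)

# Split into a proper training set and a held-out calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Quantile with the standard finite-sample correction, for ~90% coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

X_new = np.array([[0.5]])
pred = model.predict(X_new)
lower, upper = pred - q, pred + q  # interval with marginal ~90% coverage
```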
A comprehensive comparison of in-house and API-based LLM solutions, covering implementation challenges and best practices.
Understanding the core principles and architecture of AI agent systems, from their history to modern implementations and evaluation methods.
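As an illustration of the core architecture, here is a toy tool-calling agent loop: the model picks a tool, the runtime executes it, and the observation is fed back until the model answers. The `llm` function is a hypothetical stand-in for any chat-completion client (here faked with a canned response so the sketch runs), and the JSON action format is an assumption for the example.

```python
# Toy sketch of the core agent loop. Tool names and the JSON action
# protocol are illustrative assumptions, not a specific framework's API.
import json

def calculator(expression: str) -> str:
    # Deliberately restricted eval, good enough for the toy example.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def llm(messages):
    # Hypothetical stand-in: a real agent would call a chat model here.
    # This fake issues one calculator call, then answers with the result.
    if messages[-1]["content"].startswith("Observation:"):
        obs = messages[-1]["content"].removeprefix("Observation: ")
        return json.dumps({"answer": f"The result is {obs}."})
    return json.dumps({"tool": "calculator", "args": {"expression": "17 * 3"}})

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(messages)
        action = json.loads(reply)
        if "answer" in action:
            return action["answer"]
        observation = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "stopped: step budget exhausted"

print(run_agent("What is 17 * 3?"))  # -> "The result is 51."
```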
Deep dive into uncertainty quantification methods in machine learning, covering proper prediction uncertainty estimation, the limitations of point predictions, and implementation techniques.
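One simple way past point predictions is sketched below: two gradient-boosted models trained with the quantile loss estimate the 5th and 95th percentiles, yielding per-input (heteroscedastic) 90% intervals. The synthetic data with growing noise is an illustrative assumption.

```python
# Sketch: quantile regression as a step beyond point predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(3000, 1))
y = X[:, 0] + rng.normal(scale=0.2 + 0.3 * X[:, 0])  # noise grows with x

lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = np.array([[2.0], [8.0]])
print(np.c_[lo.predict(X_new), hi.predict(X_new)])  # interval widens at x=8
```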
A comprehensive exploration of reinforcement learning's impact on modern language models, covering RLHF, DPO, and future perspectives.
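The heart of DPO fits in a few lines. The sketch below assumes sequence log-probabilities have already been computed for the policy and a frozen reference model; the batch shapes and the beta value are illustrative.

```python
# Sketch of the DPO objective on pre-computed sequence log-probabilities.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss for a batch of preference pairs."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the reward margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probs standing in for real model outputs.
p_c, p_r, r_c, r_r = (torch.randn(4) for _ in range(4))
print(dpo_loss(p_c, p_r, r_c, r_r))
```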
Deep dive into the internals of the transformer architecture, including a step-by-step implementation and optimization techniques.
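At the center of those internals sits scaled dot-product attention; a minimal single-head NumPy version (dimensions are illustrative) looks like this:

```python
# Single-head scaled dot-product attention in plain NumPy.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (seq, seq) similarity logits
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # softmax over the keys
    return weights @ V                         # weighted mix of value vectors

seq_len, d_model = 5, 16
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)  # shape (5, 16)
```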
Deep dive into modern computer vision model architectures, from data preparation to deployment, with hands-on implementation of vision transformers.
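A minimal sketch of the vision-transformer front end, which turns an image into a token sequence via patch embedding (a Conv2d with stride equal to the kernel size is the standard trick); the sizes follow the common ViT-Base configuration but are otherwise illustrative.

```python
# Sketch of ViT patch embedding: image -> sequence of patch tokens.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.num_patches = (img_size // patch_size) ** 2

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, dim, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, dim) token sequence

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```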
Deep dive into synthetic data generation for LLM training and the growing importance of small language models in practical applications.
The talk explored LLMs' core principles, covering Prompt Engineering, RAG, Fine-Tuning, System Design, and methods for evaluating performance, reliability, and ethics.
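As a taste of the RAG portion, here is a toy retrieve-then-prompt pipeline. TF-IDF retrieval, the mini corpus, and the final `llm` call are simplifying assumptions; production systems typically use dense embeddings and a vector store.

```python
# Toy RAG sketch: retrieve relevant documents, then build the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Standard shipping takes 5-7 business days.",
]
vectorizer = TfidfVectorizer().fit(docs)
doc_vecs = vectorizer.transform(docs)

def retrieve(query: str, k: int = 2):
    sims = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

question = "How long do I have to return an item?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = llm(prompt)  # hypothetical model call
```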
Explored Operational Research, optimization solvers, and Data Envelopment Analysis for optimizing complex decision-making.
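A minimal sketch of the DEA idea, the input-oriented CCR model in multiplier form, solved as a linear program with scipy; the two-input/one-output data for three decision-making units (DMUs) is made up for illustration.

```python
# Sketch: DEA (input-oriented CCR, multiplier form) as a linear program.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0]])  # inputs per DMU
Y = np.array([[1.0], [1.0], [1.0]])                 # outputs per DMU

def dea_efficiency(o: int) -> float:
    m, n = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n)])          # maximize u @ Y[o]
    A_ub = np.hstack([Y, -X])                         # u @ Y_j - v @ X_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(m), X[o]])[None]  # v @ X[o] == 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + n))
    return -res.fun                                   # efficiency score in (0, 1]

# The middle DMU is dominated by the first and scores below 1.
print([round(dea_efficiency(o), 3) for o in range(len(X))])
```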
Discussion on adapting open-source LLMs to the Georgian language through tokenizer transfer and continual pretraining.
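One common tokenizer-transfer heuristic, sketched below: train a new tokenizer on target-language text, then initialize each new token's embedding as the mean of the old embeddings of its pieces under the original tokenizer. The model name and corpus are placeholders, a fast tokenizer is assumed, and details such as tied output embeddings are glossed over; this is not necessarily the method from the talk.

```python
# Sketch of tokenizer transfer with mean-initialized embeddings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "base-open-llm"  # placeholder model name
old_tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

georgian_texts = ["..."]  # placeholder: iterator over Georgian text
new_tok = old_tok.train_new_from_iterator(georgian_texts, vocab_size=32_000)

old_emb = model.get_input_embeddings().weight.data
new_emb = torch.zeros(len(new_tok), old_emb.shape[1])
for token, idx in new_tok.get_vocab().items():
    # Decompose the new token into old-vocabulary pieces and average.
    piece = new_tok.convert_tokens_to_string([token])
    old_ids = old_tok(piece, add_special_tokens=False).input_ids
    if old_ids:
        new_emb[idx] = old_emb[old_ids].mean(dim=0)

model.resize_token_embeddings(len(new_tok))
model.get_input_embeddings().weight.data.copy_(new_emb)
# ...continual pretraining on Georgian text then completes the transfer.
```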
Review of causal inference methods that go beyond conventional correlation-based techniques to answer 'what if' questions.
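A minimal sketch of why correlation alone fails: on synthetic data with a known confounder, the naive difference in means is biased, while inverse propensity weighting recovers the true treatment effect of +2. The data-generating process is an assumption made for the demo.

```python
# Confounding demo: naive contrast vs. inverse propensity weighting (IPW).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
confounder = rng.normal(size=n)
# The confounder drives both treatment assignment and the outcome.
treated = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
outcome = 2.0 * treated + 3.0 * confounder + rng.normal(size=n)

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

propensity = (LogisticRegression()
              .fit(confounder[:, None], treated)
              .predict_proba(confounder[:, None])[:, 1])
ate = ((treated * outcome / propensity).mean()
       - ((1 - treated) * outcome / (1 - propensity)).mean())
print(round(naive, 2), round(ate, 2))  # naive is biased upward; IPW ~ 2.0
```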
Discussion of time series forecasting algorithms, benchmarks, common pitfalls, and best practices.
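One classic pitfall is skipping simple baselines. The sketch below builds a seasonal-naive forecast and scores it with MASE on a toy monthly series (all data illustrative); a model is only worth its complexity if it pushes MASE below the baseline's.

```python
# Seasonal-naive baseline and MASE, the "forecast to beat".
import numpy as np

def seasonal_naive(history, horizon, season=12):
    # Repeat the last observed season across the forecast horizon.
    return np.tile(history[-season:], horizon // season + 1)[:horizon]

def mase(y_true, y_pred, history, season=12):
    # Scale by the in-sample seasonal-naive MAE; < 1 beats the baseline.
    scale = np.mean(np.abs(history[season:] - history[:-season]))
    return np.mean(np.abs(y_true - y_pred)) / scale

# Toy monthly series with trend + yearly seasonality.
rng = np.random.default_rng(0)
t = np.arange(144)
series = 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=144)
history, actual = series[:-12], series[-12:]

forecast = seasonal_naive(history, 12)
print(round(mase(actual, forecast, history), 2))
```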