About
He researches methods for dimensionality reduction, high-dimensional data visualization, and machine learning, with a focus on GPU-accelerated computing using NVIDIA's latest architectures.
His work includes developing distributed training approaches for large language models and fine-tuning techniques for domain-specific applications.
He also uses high-performance computing infrastructure to optimize model training and inference pipelines, particularly for large-scale language models and multimodal systems.