The rise of artificial intelligence has sparked debate across the tech world: is traditional coding becoming obsolete? With tools like ChatGPT, GitHub Copilot, and Replit Ghostwriter writing code at lightning speed, some believe that AI will soon replace human developers. As a Computer Science graduate with over 20 years of tech industry experience who has…
Every tech enthusiast dreams of owning a high-performance machine that can handle the most demanding workloads. For me, as a CSE graduate from NIT Rourkela with 20+ years in the tech industry, this dream became a passion project—one that would test my technical skills, patience, and love for hardware. I had multiple options to buy a…
If you’re working with Hugging Face’s transformers and peft libraries on Windows, you’ve likely seen messages or warnings related to model caching, symlinks, and environment variables. This guide demystifies how Hugging Face handles model storage, how to change the cache locations, and how to resolve common issues — especially on Windows. What Is Model Caching…
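The excerpt above deals with relocating the Hugging Face cache. As a minimal sketch of the environment-variable approach (the `D:\` path below is a hypothetical example; `HF_HOME` and `HF_HUB_DISABLE_SYMLINKS_WARNING` are the variables `huggingface_hub` documents), the relocation looks like:

```python
import os

# Set these BEFORE importing transformers/peft/huggingface_hub --
# the libraries read them at import time. HF_HOME is the umbrella
# variable; downloaded model weights live under <HF_HOME>/hub.
os.environ["HF_HOME"] = r"D:\hf-cache"  # hypothetical custom location

# On Windows without symlink support, the hub warns and falls back
# to duplicating files; this variable silences that warning.
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"

# Models downloaded after this point land under:
hub_dir = os.path.join(os.environ["HF_HOME"], "hub")
print(hub_dir)
```

Setting the variable in code only affects the current process; for a persistent change you would set it system-wide in Windows environment settings instead.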
For more than 20 years, I’ve been immersed in the ever-evolving world of technology—writing millions of lines of code, scaling products, and leading digital transformation initiatives that fueled tremendous business growth. My journey has taken me through some of the finest tech ecosystems, including Erevmax, TravelGuru, Nagarro, and PeopleStrong HR Technologies. After stepping back from…
Companies today are drowning in policy documents, employee handbooks, and compliance guidelines—but finding specific answers quickly remains a challenge. What if employees could simply ask questions in natural language and get accurate, instant responses from an AI trained on your exact documents? In my 20-year tech career, I’ve been a catalyst for innovation, architecting scalable…
In today’s fast-paced corporate environment, employees often have questions about company policies—from attendance rules to leave entitlements and codes of conduct. While traditional intranets and HR portals provide static information, generative AI offers a more interactive way to access policy information. For over 20 years, I’ve been building the future of tech, from writing millions…
AI continues to revolutionize how we solve complex problems, and model fine-tuning plays a key role in this transformation. Whether you’re building smarter chatbots, domain-specific vision models, or personalized LLMs, fine-tuning lets you customize powerful pretrained models with significantly fewer resources. Over the last 20 years, I’ve gone beyond coding mastery—championing strategic leadership that propels…
Fine-tuning large language models has revolutionized natural language processing (NLP) by allowing us to adapt powerful pretrained models to specific use cases. Whether you’re building a domain-specific chatbot, sentiment classifier, or text summarizer, fine-tuning helps bridge the gap between generic language understanding and task-specific performance. For over two decades, I’ve gone from crafting millions of…
As AI continues its rapid evolution, the demand for faster, lighter, and smarter model customization is at an all-time high. Fine-tuning has emerged as a go-to strategy to adapt pretrained models to specific domains or tasks without starting from scratch. For over 20 years, I have led transformative initiatives that ignite innovation and build scalable solutions…
As AI adoption skyrockets across industries, selecting the right GPU becomes a critical success factor. NVIDIA’s RTX 50 Series, powered by the groundbreaking Blackwell architecture, delivers versatile and powerful GPUs optimised for a wide range of AI workloads — from fast inference to efficient fine-tuning and limited full model training. For over 20 years, I’ve…
As AI continues to reshape industries, choosing the right GPU is no longer a luxury—it’s a strategic necessity. NVIDIA’s RTX 40 Series, built on the Ada Lovelace architecture, delivers next-generation power for developers, startups, and AI enthusiasts looking to scale inference, fine-tune large models, and even train them from scratch. With over 20 years in…
Artificial intelligence is evolving beyond traditional static models. To stay ahead, AI systems must continuously learn, adapt, and optimize their performance. Techniques such as active learning, A/B testing, adaptive learning, and real-time inference enable AI to become more efficient, data-driven, and responsive to changing conditions. This tech concept explores how these techniques enhance AI-driven applications and provides hands-on implementation with…
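Of the techniques the excerpt above names, active learning is the easiest to show in miniature. A hedged sketch of its core loop, uncertainty sampling, with a made-up one-feature "classifier" standing in for a real model:

```python
import numpy as np

# Uncertainty sampling: query a human label for the example the
# model is LEAST confident about, instead of labeling at random.
rng = np.random.default_rng(1)
pool = rng.uniform(0, 1, size=10)          # unlabeled examples

def predict_proba(x):
    # Hypothetical classifier: the feature itself is the positive-class
    # probability, so confidence is lowest near the 0.5 boundary.
    return np.clip(x, 0.0, 1.0)

probs = predict_proba(pool)
uncertainty = 1 - np.abs(probs - 0.5) * 2  # 1 at the boundary, 0 at the edges
query_index = int(np.argmax(uncertainty))  # send this one to an annotator
print(query_index, round(float(pool[query_index]), 2))
```

In a real pipeline the labeled example is added to the training set and the model is retrained, then the loop repeats.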
Recommendation systems drive personalized experiences across industries. From e-commerce platforms suggesting products to streaming services curating content, AI-powered recommendation engines significantly enhance user engagement and retention. For over two decades, I’ve been igniting change and delivering scalable tech solutions that elevate organisations to new heights. My expertise transforms challenges into opportunities, inspiring businesses to thrive…
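The recommendation engines the excerpt above describes often start from item-to-item similarity. A minimal sketch, assuming a tiny made-up user-item ratings matrix and plain cosine similarity:

```python
import numpy as np

# Item-based recommendation: items whose rating columns point in
# similar directions are recommended together.
# Rows = users, columns = items (illustrative ratings, 0 = unrated).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity of every item to item 0, computed over its rating column.
sims = [cosine(ratings[:, 0], ratings[:, j]) for j in range(4)]
best = int(np.argmax(sims[1:])) + 1   # most similar item other than item 0
print(best)                            # item to recommend alongside item 0
```

Production systems replace the raw ratings with learned embeddings, but the "nearest column wins" logic is the same.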
In real-world machine learning (ML) applications, models need to be continuously updated with new data to maintain high accuracy and relevance. Static models degrade over time as new patterns emerge in data. Instead of retraining models from scratch, incremental learning (online learning) enables models to update using only new data, making the process more efficient. This tech…
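The incremental (online) learning idea in the excerpt above can be sketched without any ML library: update a model's parameters batch-by-batch with only the new data, never retraining from scratch. A toy SGD update on a linear model (illustrative learning rate and synthetic stream):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = 0.0, 0.0   # model parameters, updated in place as data arrives
lr = 0.05         # learning rate

def partial_fit(x, y):
    """Update the model using one new mini-batch only."""
    global w, b
    err = (w * x + b) - y
    w -= lr * np.mean(err * x)   # gradient step computed on the new batch
    b -= lr * np.mean(err)

# Data arrives in a stream; the true relationship is y = 2x + 1.
for _ in range(300):
    x = rng.uniform(-1, 1, size=32)
    y = 2 * x + 1 + rng.normal(0, 0.05, size=32)
    partial_fit(x, y)             # incremental update, no full retrain

print(round(w, 2), round(b, 2))   # parameters converge near 2 and 1
```

scikit-learn exposes the same pattern through the `partial_fit` method on estimators such as `SGDClassifier`.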
In real-world machine learning (ML) applications, models need to be continuously updated with new data to maintain high accuracy and relevance. Static models degrade over time as new patterns emerge in data. To address this, ML pipelines can be designed for continuous training, ensuring that models evolve based on fresh data. This tech concept will…
Machine Learning (ML) has revolutionized various industries by enabling accurate predictions based on data patterns. In this tech concept, we will walk through the process of building an end-to-end ML pipeline that showcases how predictions work. The pipeline will cover data collection, preprocessing, model training, evaluation, saving the model, and deployment. In my 20-year tech…
Hyperparameter tuning is essential for achieving optimal performance in machine learning and deep learning models. However, traditional methods like grid search and random search can be inefficient, especially for computationally expensive models. This is where Hyperband and Successive Halving come in. These advanced tuning techniques dynamically allocate resources (such as training epochs) to promising configurations while eliminating underperforming ones…
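The resource-allocation idea behind successive halving, as described above, fits in a few lines: start many configurations on a small budget, keep the best half, double the budget, repeat. A hedged sketch with a made-up scoring function standing in for "train and evaluate":

```python
import random

random.seed(42)

def score(config, budget):
    # Stand-in for "train for `budget` epochs and measure validation
    # accuracy"; configs with lr closer to 0.3 score higher.
    return budget * (1.0 - abs(config["lr"] - 0.3))

configs = [{"lr": random.uniform(0.0, 1.0)} for _ in range(8)]
budget = 1
while len(configs) > 1:
    ranked = sorted(configs, key=lambda c: score(c, budget), reverse=True)
    configs = ranked[: len(ranked) // 2]   # eliminate the worst half
    budget *= 2                            # survivors get more epochs
print(configs[0])                          # the single surviving configuration
```

Hyperband wraps this loop in an outer loop over different starting budgets, hedging against configurations that only shine after longer training. scikit-learn ships the core idea as `HalvingGridSearchCV` / `HalvingRandomSearchCV`.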
Machine learning models perform best when their hyperparameters are fine-tuned for the given dataset. Traditional grid search and random search methods are widely used, but they struggle with complex, high-dimensional search spaces. Enter genetic algorithms (GAs)—a technique inspired by natural selection that iteratively evolves better hyperparameter combinations over multiple generations. In this tech concept, we explore…
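The evolve-over-generations loop the excerpt above describes can be sketched directly. This is an illustrative GA over a single hyperparameter with a toy fitness function (a real setup would evaluate each candidate by training and validating a model):

```python
import random

random.seed(7)

def fitness(lr):
    # Toy objective: validation score peaks at lr = 0.1.
    return -abs(lr - 0.1)

population = [random.uniform(0.0, 1.0) for _ in range(10)]
for generation in range(30):
    # Selection: the fittest half survive as parents (elitism).
    parents = sorted(population, key=fitness, reverse=True)[:5]
    children = []
    for _ in range(5):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2                     # crossover: average two parents
        child += random.gauss(0, 0.02)          # mutation: small perturbation
        children.append(min(max(child, 0.0), 1.0))
    population = parents + children

best = max(population, key=fitness)
print(round(best, 2))   # converges near the optimum at 0.1
```

With several hyperparameters, each individual becomes a dict or vector and crossover mixes coordinates instead of averaging a single value.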
Machine learning models make various types of predictions beyond just continuous (regression) and discrete (classification). While these two are the most well-known, modern AI applications require more nuanced predictive capabilities. This tech concept explores four additional types: probabilistic, ranking, multi-label, and sequence predictions. For ~20 years now, I’ve been building the future of tech, from…
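Of the four prediction types the excerpt above lists, the probabilistic one is the simplest to demonstrate: instead of a single class label, the model emits one probability per class, here via a softmax over hypothetical logit scores:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

logits = np.array([2.0, 0.5, -1.0])      # raw model scores for 3 classes
probs = softmax(logits)

print(probs.round(3))       # per-class probabilities, summing to 1
print(int(probs.argmax()))  # the single discrete label is just the argmax
```

Ranking, multi-label, and sequence predictions build on the same principle: the model outputs scores, and the decision rule on top of them changes.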
When building machine learning models, understanding the difference between continuous and discrete predictions is crucial. These two types of predictions determine whether you need a regression or classification model. In this tech concept, we’ll explain how continuous and discrete predictions work, their key differences, and real-world applications—along with Python code examples. For two decades now,…
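The continuous-versus-discrete distinction above can be shown on one toy dataset (made-up study-hours vs. exam-score numbers): a regression model outputs any real value, while a classifier outputs a label:

```python
import numpy as np

hours = np.array([1.0, 2.0, 3.0, 4.0])
scores = np.array([52.0, 61.0, 70.0, 79.0])   # exactly scores = 9*hours + 43

# Continuous (regression): least-squares line -> predicted score is a real number.
slope, intercept = np.polyfit(hours, scores, 1)
predicted_score = slope * 5.0 + intercept
print(round(predicted_score, 1))   # e.g. 88.0 for 5 hours of study

# Discrete (classification): thresholding the same quantity yields a label.
label = "pass" if predicted_score >= 60 else "fail"
print(label)
```

The same features can feed either model; what changes is the target variable and, with it, the loss function and evaluation metrics.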