
Vincent Granville

vincentg64

AI & ML interests

GenAI, LLM, synthetic data, optimization, fine-tuning, model evaluation

Recent Activity

posted an update 1 day ago
Stay Ahead of AI Risks - Free Live Session for Tech Leaders

Exclusive working session about trustworthy AI, for senior tech leaders. Register at https://lu.ma/zrxsvy6c

AI isn’t slowing down, but poorly planned AI adoption will slow you down. Hallucinations, security risks, bloated compute costs, and “black box” outputs are already tripping up top teams, burning budgets, and eroding trust. That’s why this session blends three things you can’t get from a typical AI webinar:

- Practical expertise: GenAI pioneer Vincent Granville will share a real-world framework for deploying hallucination-free, secure, and lightweight AI, without endless vendor contracts or GPU farms.
- Candid Q&A: Get direct answers from Vincent and your peers in an open discussion, so you leave with clarity on the challenges that matter most to you.

➡️ What You’ll Get in 60 Minutes:

- 20-min Expert Briefing — actionable principles and architectures from Vincent Granville.
- 25-min Facilitated Working Session — collaborate with fellow tech leaders, guided by Sidebar facilitators, to share hard-won lessons and leave with peer-tested solutions.
- 15-min Q&A — bring your biggest questions and walk away with clear, practical guidance.

➡️ Why this matters for you:

- Protect your org from costly AI mistakes others are already making.
- Stay credible in the C-suite with clear, confident AI strategy.
- Move faster than competitors without sacrificing security, trust, or control.

➡️ About the Speaker

Vincent Granville — GenAI scientist and co-founder of BondingAI.io, building secure, hallucination-free LLMs for enterprise. Former senior leader at Visa, Microsoft, eBay, NBC, and Wells Fargo. Author with Elsevier and Wiley. Post-doc in computational statistics from the University of Cambridge. Successful startup exit.
posted an update 3 months ago
A New Type of Non-Standard High Performance DNN with Remarkable Stability – https://mltblog.com/3SA3OJ1

I explore deep neural networks (DNNs) starting from the foundations, introducing a new type of architecture that is as different from machine learning as it is from traditional AI. The original adaptive loss function, introduced here for the first time, leads to spectacular performance improvements via a mechanism called equalization.

To accurately approximate any response, rather than connecting neurons with linear combinations and activations between layers, I use non-linear functions without activation. This reduces the number of parameters, leading to explainability, easier fine-tuning, and faster training. The adaptive equalizer – a dynamical subsystem of its own – eliminates the linear part of the model, focusing on higher-order interactions to accelerate convergence.

One example involves the Riemann zeta function: I exploit its well-known universality property to approximate any response. My system also handles singularities, to deal with rare events or fraud detection. The loss function can be nowhere differentiable, such as a Brownian motion. Many of the new discoveries are applicable to standard DNNs.

Built from scratch, the Python code does not rely on any library other than NumPy. In particular, I do not use PyTorch, TensorFlow, or Keras.

➡️ The PDF with many illustrations is available as paper 55, at https://mltblog.com/3EQd2cA. It also features the replicable Python code (with a link to GitHub), the data generated by the code, the theory, and various options, including for evaluation.
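Below is a minimal NumPy-only sketch of the general flavor described in the post: fixed non-linear functions of the input with no activation, fitted with a per-point adaptive re-weighting of the squared loss. It is a toy illustration under its own assumptions, not the replicable code from paper 55; the sine basis, the re-weighting rule, and all parameter values are hypothetical choices made only for this example.

```python
import numpy as np

# Toy sketch (illustration only, not the code from paper 55):
# approximate a response with fixed non-linear functions of the input
# (no activation), and adaptively re-weight the squared loss so that
# poorly fitted points get more attention on later updates.

x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * x) + 0.3 * np.cos(17 * x)        # synthetic response

freqs = np.arange(1, 21)                         # 20 non-linear "neurons"
basis = np.sin(np.outer(x, freqs))               # shape (200, 20), no activation

weights = np.zeros(len(freqs))                   # trainable parameters
point_weights = np.ones_like(x)                  # adaptive per-point loss weights
lr = 0.05

for epoch in range(2000):
    pred = basis @ weights
    resid = pred - y
    grad = basis.T @ (point_weights * resid) / len(x)   # gradient of weighted MSE
    weights -= lr * grad
    # Toy "equalization"-like step: up-weight points with large residuals.
    point_weights = 1.0 + np.abs(resid) / (np.abs(resid).mean() + 1e-12)

print("RMSE:", np.sqrt(np.mean((basis @ weights - y) ** 2)))
```

Swapping the sine basis for another family of non-linear functions (for instance, values of a special function) only changes how `basis` is built in this sketch; the fitting loop itself is unchanged.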

Organizations

None yet