Writing on AI and our Future

Some of my long-form thoughts.

Insights from NeurIPS 2024

Reflections on the future of AI from NeurIPS 2024, covering inference-time compute, unconditional generation, hardware efficiency breakthroughs, and the evolving relationship between AI and creativity.

An Intuitive Explanation of SGD vs Gradient Descent

An intuitive exploration of why Stochastic Gradient Descent often outperforms traditional gradient descent in machine learning optimization, covering data efficiency, a focus on steady progress, and the benefits of leveraging randomness.
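The contrast between the two optimizers can be sketched in a few lines. This is a toy illustration, not code from the post: the `gradient_descent` and `sgd` functions and the quadratic objective are all assumptions chosen for clarity.

```python
import random

def gradient_descent(grad, theta, data, lr=0.1, steps=100):
    # Full-batch gradient descent: every step averages the gradient
    # over the entire dataset before making one exact update.
    for _ in range(steps):
        g = sum(grad(theta, x) for x in data) / len(data)
        theta -= lr * g
    return theta

def sgd(grad, theta, data, lr=0.1, steps=100):
    # Stochastic gradient descent: each step uses a single random
    # example, trading exactness for many cheap, noisy updates.
    for _ in range(steps):
        x = random.choice(data)
        theta -= lr * grad(theta, x)
    return theta

# Toy objective: mean of (theta - x)^2 over the data,
# whose minimizer is simply the mean of the data (2.5 here).
data = [1.0, 2.0, 3.0, 4.0]
grad = lambda theta, x: 2 * (theta - x)

print(gradient_descent(grad, 0.0, data))  # converges near 2.5
print(sgd(grad, 0.0, data))               # hovers noisily near 2.5
```

On this tiny problem both land near the same answer, but SGD touches one example per step instead of the whole dataset, which is the data-efficiency point the post explores.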

Auditing Stable Diffusion with Perplexity

I used prompt engineering and RAG to have Perplexity's default LLM design a process for a demographic fairness audit of the Stable Diffusion v2.1 text-to-image model.

Generative AI as an Ethical Theorist

I used prompt engineering and RAG to get LLMs to pose as ethical theorists and respond to a contemporary AI ethics dilemma: the potential adoption of social robots in public facilities serving vulnerable and special-needs children.

Optical Pairing: Streaming K-Means for Zero-Shot IoT Communication

A patented computer vision and online machine learning algorithm enabling zero-shot optical communication from IoT devices to arbitrary mobile phones, robust to challenging outdoor lighting conditions and effective at distances up to 25 ft.