Writing on AI and our Future

Some of my long-form thoughts.

An Intuitive Explanation of SGD vs Gradient Descent

An intuitive exploration of why Stochastic Gradient Descent often outperforms traditional gradient descent in machine learning optimization, covering data efficiency, a focus on steady progress, and the benefits of randomness.
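To make the contrast concrete, here is a minimal sketch (not taken from the article itself) comparing full-batch gradient descent with SGD on least-squares linear regression; the data, step counts, and parameter names like `lr` and `batch_size` are illustrative assumptions:

```python
# Illustrative sketch: full-batch gradient descent vs. SGD on
# least-squares linear regression. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=1000)

def full_batch_gd(X, y, lr=0.1, n_steps=100):
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        # Exact gradient computed over ALL 1000 examples each step.
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def sgd(X, y, lr=0.1, n_steps=100, batch_size=32):
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        # Noisy but cheap gradient estimate from a random minibatch:
        # each step touches 32 examples instead of 1000.
        idx = rng.integers(0, len(y), size=batch_size)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch_size
        w -= lr * grad
    return w

print("GD error: ", np.linalg.norm(full_batch_gd(X, y) - true_w))
print("SGD error:", np.linalg.norm(sgd(X, y) - true_w))
```

Both reach similar error here, but each SGD step costs a small fraction of a full-batch step, which is the data-efficiency argument in miniature.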

Auditing Stable Diffusion with Perplexity

I used prompt engineering and retrieval-augmented generation (RAG) to have Perplexity's Default LLM design a process for a demographic fairness audit of the Stable Diffusion v2.1 text-to-image model.

Generative AI as an Ethical Theorist

I used prompt engineering and RAG to get LLMs to pose as ethical theorists and respond to a contemporary AI ethics dilemma: the potential adoption of social robots in public facilities serving vulnerable and special-needs children.