Today, a few quick notes on aspects of artificial intelligence (AI), specifically synthetic media, so-called ‘deepfakes’.
Programming note: An unscheduled absence from me the last two weeks, for which, my apologies. Financial quarter-end hit and the QE piece became dangerously long; it remains, as yet, unfinished.
Disclaimer: As always the usual disclaimers apply. All posts are solely my own opinion. As such, they should not be construed as investment advice, nor do the opinions expressed reflect the views of my employer. The latter point is particularly relevant today.
Before starting this piece, I would like to very clearly state that I am in no way an AI practitioner nor do I claim to possess any degree of expertise in the subject.
Moreover, I would like to acknowledge that artificial intelligence is very much ‘the current thing’ and that I have quite literally nothing of value nor anything new to share on the topic. And yet, here we are.
Let’s begin.
Amara’s law
I am a keen observer of progress in machine learning and generative AI. I regularly catch up with several friends who work either in businesses at the forefront of developing and deploying AI or who work on the research side. This is part of my vague strategy to keep a handle on what is happening. Speaking to them, my lasting impression is that, even in their eyes, the rate of progress is stupefying.
In my first post, I explained that a key motivation behind this Substack was the sense of urgency that I felt in the face of technological developments. My overwhelming intuition was that I needed to push out my ideas while there was still time to credibly claim them as my own, not fully knowing what would come next.
After reflecting further this week, I think we are now there, past that point of no return; the toothpaste is out of the tube. In all honesty, the toothpaste has been out of the tube for a while and is probably starting to dry. But it is only recently, particularly following the release of various consumer-facing AI applications, that the degree of progress has really come into public view.
What’s most remarkable is that most people did not seem to believe that we would be able to squeeze the toothpaste out at all. In fact, I’m not even sure that most of the people doing the squeezing thought that it would be possible, at least not this quickly. The metaphor, I admit, is now stretched. But here we are, covered in toothpaste.
Between the release of Stable Diffusion, the proliferation of synthetic media, and the launch of OpenAI’s ChatGPT, it is difficult now to deny that the emergent properties of artificial intelligence are consequential. Progress in the space could end tomorrow and we would still have a decade or more of new commercial applications to explore.
That said, it seems likely that Amara’s Law applies here too: we tend to overestimate the effect of a technology in the short run and underestimate its effect in the long run.
It can, at times, seem as though everyone everywhere is evangelising about artificial intelligence. This has shades of crypto and bitcoin, and I admit that is disconcerting. No doubt, there will be plenty of charlatans. All too often, one sees a credulous CEO reach for ‘AI’ as a catch-all synonym for any form of advanced technology. Another worrying signal is that the exuberance surrounding the impact of AI has helped drive US equity markets towards all-time highs.
So, in the short term, some of the current excitement around AI may prove to be unfounded. But, in my view, at least over the medium to long term, the emergence of generative and other deep machine learning artificial intelligences will prove to be a watershed moment.
What is artificial intelligence?

What is AI? This question is somewhat daft. But I am also rather daft and, therefore, inclined to address it.
As far as I am concerned, artificial intelligence is a very generic term used to refer to any computational system that simulates an aspect of human intelligence without direct human instruction. Well, that is what it means today. But through the years its meaning has evolved significantly.
At some point in the past, say 70 years ago, artificial intelligence referred to any machine that could complete a human task. In the 1950s, I suspect that an electronic washing machine would have been considered by most experts to be artificially intelligent.
As technologies improved, AI began to refer to programmes that could be taught rules and derive their own outputs, like a chess-playing robot that would be taught the rules of chess and devise its own strategies. This could be viewed, perhaps, as a very narrow form of machine learning.
More recently, artificial intelligence has connoted a deeper form of machine learning, where a model is trained on a dataset and learns to make decisions based on that training data. Think of a cleaning robot, a Roomba, where the data is the 3D layout of your home. It trains by exploring the house and learns an efficient way to clean it.
Right now, artificial intelligence seems to refer primarily to generative AI: models that train themselves (often unsupervised) on a dataset and are then able to generate outputs that can be adapted for different modalities. Remarkably, these generative models exhibit emergent properties; they can adapt to tasks and contexts that they were not designed to complete.
So what is AI? Perhaps artificial intelligence simply refers to the most cutting-edge technology at a given point in time, or the next thing that technologies might be able to do. It is a shape-shifting, forward-looking sort of term.
How I use AI in my day-to-day
By and large, my main use cases for artificial intelligence in my own personal capacity (at least as far as I am consciously aware) are as follows:
Translating texts from one language to another (DeepL)
Producing images e.g. illustrating songs or blogs (Midjourney, DALL-E)
Co-piloting when writing code (ChatGPT, GitHub Copilot)
Generating ideas (ChatGPT)
As of this week, I’ll add another:
Creating deepfakes (open source code)
How I deepfaked my boss
Earlier this week I was talking to a colleague who asked whether it was possible to create a deepfake of someone that we knew.
Sure, why not, I thought. Deepfake technology is very well established.
Prove it then, they said…
Later that evening, just after midnight, I found myself in bed, sat in front of my laptop, watching an AI model churn through the data inputs I had fed it.
My victim was the Chief Investment Officer at my firm. My method was to use only open-source materials, such that anyone in the world with an internet connection could do the very same thing that I was doing.
I set out expecting the key challenge to be gathering a large quantity of data. As it transpired, what mattered was finding a small number of extremely high-quality inputs.
I began by searching for public-domain footage of the individual and found an interview that they gave to Bloomberg News. I sampled their voice three times, using 90 seconds of audio in total. Then, I recorded 20 seconds of their facial movements.
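For the curious, this sampling step amounts to little more than cutting clips out of the source video. A minimal sketch in Python, assuming ffmpeg is installed and the interview has been saved locally (the filename and timestamps below are invented for illustration):

```python
import subprocess

def extract(src, start, duration, dest, extra=()):
    """Cut a clip from src with ffmpeg (must be installed and on the PATH)."""
    cmd = ["ffmpeg", "-y", "-i", src, "-ss", start, "-t", duration, *extra, dest]
    subprocess.run(cmd, check=True)

# Three voice samples, 90 seconds in total, saved as 16 kHz mono WAV
# (a common input format for voice-cloning models).
wav_opts = ("-ar", "16000", "-ac", "1")
extract("interview.mp4", "00:01:10", "30", "voice_0.wav", wav_opts)
extract("interview.mp4", "00:03:05", "30", "voice_1.wav", wav_opts)
extract("interview.mp4", "00:06:40", "30", "voice_2.wav", wav_opts)

# A 20-second clip of the face to drive the lip-sync stage.
extract("interview.mp4", "00:02:15", "20", "face.mp4")
```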
Using a voice synthesis model, I managed to clone the individual’s voice. It took a while to tune the parameters in such a way that I was able to recreate their particular accent and the idiosyncrasies of their dialect. I then wrote my script.
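To give a flavour of this step (a sketch, not necessarily the exact model I used): Coqui’s open-source XTTS, for example, can clone a voice from a handful of reference clips. The script text and filenames below are placeholders, carried over from the sampling sketch above:

```python
# pip install TTS  (Coqui TTS; downloads the model on first use)
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

script = "Hello. I have something rather unusual to confess..."  # placeholder script

tts.tts_to_file(
    text=script,
    speaker_wav=["voice_0.wav", "voice_1.wav", "voice_2.wav"],  # reference samples
    language="en",
    file_path="cloned_voice.wav",
)
```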
Finally, using just a single open-source model, I mapped the cloned voice onto the face of the individual and left the model to run. I improved the initial output by reducing the resolution of the video and broadening the model’s focus to include the chin area.
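I will not name the model here, but as it happens the two fixes just described, lowering the resolution and widening the crop to take in the chin, correspond neatly to the --resize_factor and --pads options of the open-source Wav2Lip project (github.com/Rudrabha/Wav2Lip). So, as an illustration rather than a recipe, the final step might look like this, run from a checkout of that repo with its pretrained checkpoint downloaded:

```python
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
        "--face", "face.mp4",            # the 20-second facial clip
        "--audio", "cloned_voice.wav",   # the synthesised voice
        "--resize_factor", "2",          # halve the resolution; often improves results
        "--pads", "0", "20", "0", "0",   # extra padding below the face to include the chin
    ],
    check=True,
)
```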
Having succeeded, I showed the result to both my colleague and the subject of the deepfake, and asked the latter’s permission to share the clone in this limited instance.
So, here is the original:
And here is my clone:
In less than an hour I had created a deepfake, using only publicly available media and code.
I am still slightly taken aback by quite how straightforward it was to create a fairly convincing piece of synthetic media.
Admittedly, even after numerous adjustments, the mouth still glitches out towards the end. But I find this rather charming, given that it coincides with the clone revealing that they are a deepfake.
Closing thoughts
There is so much more I would like to say on this subject this week, but I have a horrendous amount of other work to finish. I hope to return to this subject soon.
Until then.