
A Wonderful Test Post
Sometimes you just need to test a post and see if it works.
Published on 16 August 2025 by Maarten Goudsmit
Always be testing those posts!!
Your no-nonsense, emoji-fueled guide to setting up the ChatGPT Codex CLI from scratch — even if you've never touched a terminal before. 💻✨
Published on 15 August 2025 by Maarten Goudsmit
If you’ve ever wanted to harness the power of ChatGPT Codex straight from your terminal, you’re in the right place. Whether you’re building quick scripts, automating repetitive tasks, or just having some AI-powered fun, the CLI (Command Line Interface) is your trusty sidekick. This guide is designed for absolute beginners — no prior CLI wizardry required.
First things first: make sure you’ve got Python installed 🐍. Open your terminal (on macOS, it’s Terminal; on Windows, use PowerShell or Windows Terminal) and type:
python --version
If you see something like Python 3.10.6, you're good to go. If not, head over to python.org/downloads and install the latest stable version.
Next, you’ll need an API key from OpenAI. This is like your personal password to the AI kingdom 🏰. Sign up or log in at platform.openai.com, then navigate to View API keys. Copy the key somewhere safe. Never share it — think of it as your AI credit card number.
With Python ready and your API key in hand, install the openai Python package. In your terminal, run:
pip install openai
This gives your CLI the ability to talk to ChatGPT Codex. Now create a simple Python script to test things out. Save the following as codex_test.py:
from openai import OpenAI

# Create a client. Replace the placeholder with your actual key,
# or drop the argument and set the OPENAI_API_KEY environment variable instead.
client = OpenAI(api_key="YOUR_API_KEY")

# The original Codex completion models (like code-davinci-002) have been retired,
# so this calls the chat completions endpoint with a current model; swap in any model your account can use.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a Python function that says hello"}],
    max_tokens=50,
)

print(response.choices[0].message.content.strip())
Before running this script, replace "YOUR_API_KEY" with your actual key 🔑. Then run:
python codex_test.py
If everything is set up correctly, you’ll see a friendly Python function printed out. 🎉
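The exact wording depends on the model, but the printed function will look something along these lines (illustrative only, not real output):

def say_hello():
    print("Hello!")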
From here, you can start exploring more advanced uses. For example, you can pipe prompts straight from the terminal, as long as the script reads its prompt from standard input (see the sketch below):
echo "Write a haiku about APIs" | python codex_test.py
Or set up environment variables so you don’t hardcode your API key. On macOS/Linux:
export OPENAI_API_KEY="your_api_key_here"
And on Windows (PowerShell):
setx OPENAI_API_KEY "your_api_key_here"
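Once the variable is set (for setx, open a fresh PowerShell window so it takes effect), the script can drop the hardcoded string. A small sketch of the change:

import os
from openai import OpenAI

# Read the key from the environment instead of hardcoding it in the file.
# Passing no arguments also works, since OpenAI() looks for OPENAI_API_KEY on its own.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])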
Finally, remember: the CLI is just the start. Pair it with shell scripting, cron jobs, or even Git hooks, and you can have Codex generating, refactoring, and documenting code on autopilot. For inspiration and more examples, check out the OpenAI API documentation. The sky’s the limit ☁️ — now go build something amazing!
Learn how to make your words look good online with Markdown — the simplest formatting tool you never knew you needed, but maybe always did.
Published on 4 August 2025 by Maarten Goudsmit
Markdown is a lightweight markup language that lets you format text for the web with just a few characters. It's simple enough for beginners yet powerful enough for pros. Many platforms, including GitHub and Reddit, support it out of the box.
To create headings, you use hash marks. For example:
# This is a heading 1
## This is a heading 2
### This is a heading 3
You can also make text italic or bold by wrapping it in * or ** respectively. Inline code snippets are also easy: just surround the text with backticks.
Images are just as easy! But be careful: a syntax like [alt text](https://example.com/image.png) won't show an image, only a link. The correct syntax adds an exclamation mark in front: ![alt text](https://example.com/image.png).
Lists in Markdown can be unordered or ordered. Here's an unordered one:
- First item
- Second item
- Third item
And here's an ordered list:
1. First item
2. Second item
3. Third item
Sometimes you'll want to link to a website inline like this: Markdown Guide, or you might forget to close the bracket [like this(https://example.com).
Quoting is also fun:
This is a blockquote but I forgot to capitalize the first letter.
For more info, check out the official Markdown guide or the Markdown cheat sheet, which is very handy.
Tables are possible too:
| Name  | Age |
| ----- | --- |
| Alice | 30  |
| Bob   | 25  |
In conclusion, Markdown is not just easy, it's also a joy to use. Start small, practice often, and you'll be a pro in no time.
An exploration of the evolution of large language models, from early statistical methods to today's cutting-edge AI systems.
Published on 1 August 2025 by Maarten Goudsmit
Large language models (LLMs) have transformed the landscape of artificial intelligence. These systems are designed to process and generate human-like text based on vast amounts of training data. Their history is closely linked to advances in both computational power and the availability of large text datasets. From humble beginnings in statistical language modeling to the neural network revolution, LLMs have been at the forefront of AI's most exciting breakthroughs.
In the early days, statistical methods such as n-grams dominated natural language processing (NLP). These models relied on counting word sequences in large corpora to estimate probabilities. While effective for some tasks, they struggled with longer-range dependencies and lacked the ability to generalize. You can read more about this era of NLP on Wikipedia.
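To make that concrete, here is a toy sketch (not from the original post) of a bigram model: it counts adjacent word pairs in a tiny corpus and turns the counts into next-word probabilities.

from collections import Counter, defaultdict

# A tiny "corpus" to count word pairs in.
corpus = "the cat sat on the mat . the cat ate".split()

# Count how often each word follows each other word (bigram counts).
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

# Estimate P(next | prev) by normalizing the counts for that previous word.
def next_word_probs(prev):
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}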
The arrival of neural networks in the 1980s and 1990s marked a turning point. Recurrent neural networks (RNNs) and later Long Short-Term Memory networks (LSTMs) addressed some limitations of n-gram models by introducing mechanisms for remembering information across longer sequences. Still, these architectures faced challenges with very long contexts, leading to further research.
A major leap came in 2017 with the introduction of the Transformer architecture by Vaswani et al. in the paper Attention Is All You Need. Transformers eliminated the sequential bottleneck of RNNs by using attention mechanisms to process tokens in parallel. This enabled the training of much larger models and opened the door to scaling up to unprecedented sizes.
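As a rough illustration (a sketch, not the full architecture), the core operation behind this, scaled dot-product attention, fits in a few lines of NumPy:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every query attends to every key at once, so all positions are processed in parallel."""
    d_k = Q.shape[-1]
    # Similarity of each query with each key, scaled to keep the softmax well behaved.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # The output for each position is a weighted mix of the value vectors.
    return weights @ V

# Four token positions with 8-dimensional representations; self-attention uses the same
# tokens for queries, keys, and values.
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)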
Since then, models like OpenAI's GPT series, Google's BERT, and Meta's LLaMA have demonstrated remarkable capabilities in language understanding and generation. These models are trained on massive datasets and have billions, or in some cases even trillions, of parameters. The shift toward large-scale pretraining followed by fine-tuning has become a defining paradigm in NLP research.
Notable milestones in the development of LLMs include:
- 2017: the Transformer architecture is introduced in "Attention Is All You Need".
- 2018: BERT and the first GPT model popularize large-scale pretraining.
- 2019: GPT-2 shows surprisingly fluent open-ended text generation.
- 2020: GPT-3 scales to 175 billion parameters and enables few-shot prompting.
- 2022: ChatGPT brings conversational LLMs to a mainstream audience.
- 2023: GPT-4 and open-weight models such as LLaMA broaden both capability and access.
The rise of LLMs has also brought about challenges and concerns. Issues like bias, misinformation, and environmental costs of training large models have sparked debates within the AI community. Researchers and policymakers are working on strategies to ensure these systems are developed and deployed responsibly.
Looking ahead, the future of LLMs will likely involve models that are more efficient, interpretable, and aligned with human values. Hybrid systems combining symbolic reasoning with neural architectures may emerge, as well as advances in multimodal AI that can understand and generate not just text but also images, audio, and more. For further reading, the Stanford Center for Research on Foundation Models offers a comprehensive overview of current developments.