How to delete all pods from a specific namespace with kubectl?

Run the following to list all pods across all namespaces:

kubectl get pods --all-namespaces

The first column shows the namespace and the second column shows the pod name, as follows:

NAMESPACE      NAME                                       READY   STATUS             RESTARTS        AGE
kube-flannel   kube-flannel-ds-skdpx                      0/1     CrashLoopBackOff   20 (3m6s ago)   27m
kube-system    coredns-64897985d-kfssj                    0/1     Pending            0               10m
kube-system    coredns-64897985d-p96wd                    0/1     Pending            0               10m
kube-system    etcd-ip-172-31-45-235                      1/1     Running            0               34m
kube-system    kube-apiserver-ip-172-31-45-235            1/1     Running            0               34m
kube-system    kube-controller-manager-ip-172-31-45-235   1/1     Running            0               34m
kube-system    kube-proxy-vttwh                           1/1     Running            0               34m
kube-system    kube-scheduler-ip-172-31-45-235            1/1     Running            0               34m

If there are too many pods, you can also list only the pods of a specific namespace, for example kube-flannel:

kubectl get pods --namespace=kube-flannel

Replace kube-flannel with any other namespace you want.

Next, to delete a pod from kube-flannel (or any other namespace), run the following:

kubectl delete pods kube-flannel-ds-skdpx --namespace=kube-flannel

Replace kube-flannel-ds-skdpx with the name of your pod.
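If you want to delete every pod in a namespace at once rather than one by one, kubectl also supports the --all flag (double-check the namespace first, since this removes all of its pods):

```shell
# Delete all pods in the kube-flannel namespace in one command
kubectl delete pods --all --namespace=kube-flannel
```

Note that pods managed by a controller such as a DaemonSet or Deployment will be recreated automatically after deletion.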

Finally if you want to delete the namespace with kubectl:

kubectl delete namespace kube-flannel

This is really tedious with kubectl and I really wish there were an easier way, so let me know if you have one in the comments!


Step by Step guide on how to get started with Drogon on Windows with Visual Studio

This blog post explains, step by step and in as much detail as I could remember, how to start working with Drogon, the C++ web framework that ranks as the world’s fastest on TechEmpower’s Fortune benchmark (at the time of writing), in Visual Studio on Windows 10.

Disclaimer: this was only tested with Visual Studio 2022.

TechEmpower’s Fortune benchmark covers ORM, database connectivity, dynamic-size collections, sorting, server-side templates, XSS countermeasures, and character encoding.

Step 1: Install Visual Studio

If you install Visual Studio 2022, it installs vcpkg together with it by default. There is a limitation with that bundled version though: it only works in manifest mode. This guide uses classic mode, which means that we will later cover how to install vcpkg independently.

Anyway, launch the Visual Studio Installer, click on Modify, then install Desktop Development with C++ as shown in the picture below.

Step 2: How to install vcpkg on Windows?

  1. Open a Command Prompt in Administrator mode. This is important for the setup: without Administrator mode you will get “error: failed to install system targets file vcpkg”.
  2. Follow the official getting-started steps, the first of which is to clone the repo:
  3. git clone
  4. Run the batch file: .\vcpkg\bootstrap-vcpkg.bat
  5. Start -> type “Edit the system environment variables” -> click on “Environment Variables” -> edit Path and add the path to vcpkg as shown below

6. Now open a new command prompt again in administrator mode. Run the following:

vcpkg integrate install

7. Next install Drogon by running:

vcpkg.exe install drogon[ctl]:x64-windows

or

vcpkg.exe install drogon

Step 3. How to make Visual Studio use your custom vcpkg.exe?

  1. Open Visual Studio
  2. Create a CMake C++ Project or clone this sample repo:
  3. Go to Tools->vcpkg Package Manager->Use custom path to vcpkg.exe then point it to the .exe that you have just installed as shown below:

4. Open CMakePresets.json file and point to the custom toolchain file. For example the repo above uses “toolchainFile”: “C:\Users\iamyo\projects\vcpkg\scripts\buildsystems\vcpkg.cmake” which you will need to change to your own.

5. Modify the CMakeLists.txt to use at least C++17 using the following:

set_property(TARGET blog PROPERTY CXX_STANDARD 17)

6. Modify the CMakeLists.txt of the project to include drogon as follows:

find_package(Drogon CONFIG REQUIRED)
target_link_libraries(blog PRIVATE Drogon::Drogon)

You can check the sample repo to see what it looks like if you are confused about which file to modify. That’s it! You should now be able to compile and run your project in Visual Studio.


What is the best mouse to play Apex Legends?

The best mouse for Apex Legends, or any other FPS game, can be almost any model; what matters most is knowing how to recognize a good one. Below are a few qualities that I think are important.

When evaluating a gaming mouse, sensitivity, durability, and ergonomics are three of the most significant factors to consider. These features can significantly impact a player’s overall performance and comfort. In this guide, we will delve into the methodology of testing these crucial aspects to find the perfect gaming mouse.

Testing Sensitivity

Sensitivity is gauged by a mouse’s DPI (Dots Per Inch) and is essential in determining the mouse’s speed and accuracy. High DPI means the cursor will move faster on the screen, while low DPI slows it down, providing more precision.
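The relationship is linear: DPI is simply the scale factor between physical mouse movement and on-screen cursor distance. A quick sketch (the helper function name is mine):

```python
def cursor_travel_px(inches_moved: float, dpi: int) -> float:
    """Pixels the cursor travels for a given physical mouse movement at a DPI setting."""
    return inches_moved * dpi

# The same one-inch swipe covers four times the screen distance at 1600 DPI vs 400 DPI.
print(cursor_travel_px(1.0, 400))   # 400.0
print(cursor_travel_px(1.0, 1600))  # 1600.0
```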

To test sensitivity, you’ll need a consistent testing environment, preferably a game you’re familiar with, where you can predict the targets. A game with a shooting range or practice mode is perfect for this. Start with a low DPI setting and gradually increase it. Test each setting by trying to hit targets as quickly and accurately as possible. Monitor your comfort level and accuracy at each DPI setting. It is vital to remember that while high DPI mice are often marketed as better, each player’s ideal sensitivity is subjective and dependent on their play style.

Testing Durability

Durability often boils down to the build quality and longevity of the mouse. One of the practical ways to assess a mouse’s durability is by examining its construction. High-quality materials such as a robust plastic or even metal body, reinforced left and right click buttons, and braided cables in the case of wired mice, often signify durability.

To assess the durability over time, you can look at online reviews from users who have used the mouse for extended periods. Pay attention to recurring issues such as double-click problems, wear on the scroll wheel, or deteriorating skates. Also, consider the manufacturer’s warranty period, as this can often indicate the confidence they have in their product’s longevity.

Testing Ergonomics

Ergonomics is all about how comfortable the mouse is during long gaming sessions. An ergonomic mouse should fit well in your hand, support your grip style (palm, claw, or fingertip), and not cause any strain even after hours of use.

When testing ergonomics, consider the size of the mouse relative to your hand and your grip style. The mouse’s weight is also an essential factor: some players prefer a heavier mouse for stability, while others favor a lighter mouse for swift movements. Most high-end gaming mice have customizable weights to accommodate this preference.

Button placement is another factor to consider. All buttons should be easily accessible without straining your fingers. The texture of the mouse also plays a role, with some gamers preferring a smooth finish while others opt for a textured one for better grip.

Lastly, test the mouse over an extended gaming session to see if it causes any discomfort or strain in your hand, wrist, or arm. Keep in mind that ergonomics is highly subjective, and what feels comfortable can vary widely from person to person.

Sensitivity, durability, and ergonomics are key factors in finding a gaming mouse that fits your needs. It involves a combination of objective testing and subjective feeling, as everyone’s preferences differ. By using this guide as a foundation, you can better assess your options and find a mouse that will enhance your gaming performance while providing optimal comfort.

If you would like to read more, check out our other articles on Apex Legends, such as: Can you play Apex Legends with keyboard and mouse on PS4 & PS5?


Leadership in times of crisis – JCI Edition

I recently attended a panel discussion on this exact subject conducted by the Luxembourg School of Business with Paul Green, Jr. from McCombs School of Business and Jan Muehlfeit, former chairman of Microsoft Europe.

And throughout the talk I couldn’t help but draw a parallel between the discussion and how we form leaders in JCI.

So what is JCI anyway?

JCI, or Junior Chamber International, has often been referred to as the “world’s best kept secret” because it is not as widely known or recognized as other larger organizations.

This is primarily due to a lack of public awareness as JCI focuses on developing young (18 – 40) leaders and creating local community impact, often resulting in limited media attention. This is particularly true in the African region.

JCI’s decentralized structure and emphasis on individual development can limit its visibility on a global scale. With limited resources for marketing and publicity, JCI’s primary focus is on projects and member development.

Nevertheless, JCI’s impact is significant as it provides valuable leadership opportunities, fosters positive change through community projects, and promotes international collaboration. While it may lack public recognition, its influence is deeply felt by its members and the communities they serve.

Let’s dive deeper into how JCI provides Leadership development opportunities.

Leaders vs Managers

During the panel discussion, the speakers defined the main difference between a Leader and a Manager: a Leader has a vision and can inspire people to look beyond an immediate crisis toward a long-term perspective for the business, while a Manager is more focused on operational efficiency, meeting targets, and resolving immediate challenges.

But the line is blurred in the real world; an individual should have a balance of both. And, in my personal opinion, while management skills are widely taught, Leadership qualities are harder to acquire through conventional education.

JCI is the best way that I know of to teach someone Leadership skills.

Leading projects with volunteers

Running projects in JCI is different from running a team in the corporate world and more like running a startup. The main difference that I am referring to is that in a company you pay an employee to work for you.

Throwing money at a problem (or an employee, in this case) is a solution that one often reaches for during recruitment. In JCI you have to get volunteers to work on a project for free, on time taken away from their personal lives. This requires a whole different set of skills.

  • You have to clearly articulate your vision and connect with the other members to work on the project.
  • You, as a project director, have to lead by example to show your commitment.
  • Your volunteers might come from different backgrounds. For example, maybe you are lucky enough to have an accountant as a volunteer to handle your budget. But in JCI it can happen that you don’t, and in that case you need to know how to empower another member to handle the budget.
  • You have to know how to foster an inclusive environment. The members will only truly devote themselves to the project when they feel that they are being heard. This in turn fosters a sense of ownership, engagement, and shared purpose.

I use volunteer and member interchangeably in the section above; they refer to the same people.

Starting projects with 0 budget

The term lean startup has become popular in the past decade. With JCI we go one step further and define a 0-budget project. A disclaimer to anyone from JCI reading this: this is something specific to my local chapter.

What we mean by 0 budget is to start a project with a simulation of an empty bank balance. That is, we only use funds that we can raise for the project through sponsorships, partnerships, or selling tickets. And at the end of the project we make sure that it either breaks even or has a small surplus.

Sounds hard? Well it is, but it’s the best way to learn Leadership skills. In a way, JCI projects simulate times of crisis, and this is why I felt, when attending the panel discussion, that the attendees wouldn’t really learn how to lead in times of crisis unless they actually practiced it.

And one way to do that is to join JCI.

How do I join JCI?

JCI is present in more than 100 countries. I would suggest you search for “JCI <insert local town name>”. For example “JCI Curepipe” or “JCI <insert country name>” like “JCI Mauritius” in Google Search.

If you can’t find any then feel free to comment below and I will help you out 🙂


How to fix “Something went wrong. If this issue persists please contact us through our help center at”?

I struggled a lot with this, and it turns out that, for me at least, the issue was my VPN. I use NordVPN, so it might be different in your case.

Just disconnecting the VPN was not enough for me. If you are using NordVPN you need to go to Threat Protection and turn off or pause Web Protection as shown in the screenshot below:

Once you do that you can regenerate a response and it should hopefully work.

Reason 2 might be that your browser is using a proxy. Try turning it off; where to find the setting depends on the browser you are using.

The other reason might also be that OpenAI is just overloaded at the moment! Let us know in the comments if this helped 🙂 Cheers!


What happens if scrum teams become too large?

In Scrum, it’s recommended that teams are small and cross-functional, typically consisting of 5-9 members. This is because a larger team can make communication and coordination more difficult, and may lead to a loss of efficiency, productivity, and effectiveness.

If a Scrum team becomes too large, some of the potential consequences are:

1. Communication breakdowns

Larger teams may have more difficulty communicating effectively, leading to misunderstandings, missed deadlines, and inefficiencies.
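One way to quantify this: the number of pairwise communication channels grows quadratically with team size (n·(n−1)/2), so each added member makes coordination disproportionately harder. A quick illustration:

```python
def communication_paths(team_size: int) -> int:
    """Pairwise communication channels in a team: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

# A team of 9 has more than three times the channels of a team of 5.
for n in (5, 9, 15):
    print(n, communication_paths(n))  # 5 -> 10, 9 -> 36, 15 -> 105
```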

2. Coordination challenges

With more team members, it can be harder to coordinate efforts, plan sprints, and make sure everyone is aligned on the team’s goals and priorities.

3. Difficulty in maintaining focus

Large teams may struggle to stay focused on the most important tasks, leading to delays and lower quality work.

4. Increased overhead

A larger team can require more resources and time to manage, which can create additional administrative overhead.

5. Reduced ownership and accountability

With more team members, individuals may feel less ownership and accountability for their work, which can lead to decreased motivation and productivity.

What is the solution to this problem?

The solution is to split the team into smaller, cross-functional teams. This is known as “scaling Agile” or “Agile at scale”.

1. Scrum of Scrums (SoS)

In this approach, each team sends a representative to a higher-level meeting called the SoS meeting. This meeting focuses on discussing and resolving any cross-team dependencies, risks, and issues. The Scrum of Scrums representative is responsible for communicating the decisions made in the SoS meeting back to their team.

2. Large Scale Scrum (LeSS)

This approach is designed for larger organizations that require multiple Scrum teams to work together. LeSS is based on a set of principles and rules that help to coordinate and align the work of multiple Scrum teams. It includes practices like shared product backlog, joint sprint planning, and joint retrospectives.

3. Scaled Agile Framework (SAFe)

This approach is a comprehensive framework that provides a set of roles, processes, and artifacts for scaling Agile. SAFe is designed to support the coordination and alignment of multiple teams across an enterprise. It includes practices like program increments, value streams, and release trains.

In summary, if a Scrum team becomes too large, the solution is to split the team into smaller, cross-functional teams, and adopt a scaling Agile approach that suits your organization’s needs.


Why I am dropping Google AMP for this blog

A few years ago in my search to make this blog faster, I discovered Google AMP. The learning curve to get started was acceptable to me and I didn’t have a lot of pages to change to make this happen.

Support for AMP at that time was OK-ish, although it did look like interest in the project was dwindling. I should have probably taken a hint that it was a bad idea to stick with AMP at that time.

Anyway, after modifying the blog’s template, adding AMP ads, removing JS and CSS code to make it AMP compliant I have to admit that despite the PITA that setting up AMP is, I was actually satisfied with the performance improvement it brought me.

But the fact that the latest Google GA4 is still not supported by AMP after so many years, and that there is no clear roadmap for its support, is what, I would say, broke the camel’s back for me.

But does Google GA 4 support AMP or plan to support AMP?

In this tweet below dated 4th of May 2023 you can see that although Google plans to support GA4 on AMP there is currently no timeline for them to implement it.

At the time of writing, there are around 60 days left until GA4 becomes the standard and we lose the old Analytics. There is a workaround you can try from a third party.

I would like to add a disclaimer here that I have not tried David’s blog post. Anyway, let’s see what happens.


How big are Large Language Models on disk?

This question hit me when I was downloading the Galactica-1.3b model. The only way I have found so far is to go to the model’s HuggingFace page and then click on Files and Versions; for example, for Galactica:

You should then see something like the following:

screenshot from hugging face showing size of the model

This will show the files in the model. Check pytorch_model.bin: on the right it should show you the size of the model, as pointed to by the red arrow in the screenshot. In this case it shows that the size is 2.63 GB.
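As a rough sanity check, you can also estimate a model’s on-disk size from its parameter count: weights stored as 16-bit floats take 2 bytes each, so 1.3 billion parameters come to about 2.6 GB, close to the 2.63 GB shown on the page. A back-of-the-envelope sketch (ignoring non-weight files, and the helper name is mine):

```python
def estimated_size_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate checkpoint size: parameters times bytes per parameter (fp16 = 2)."""
    return num_params * bytes_per_param / 1e9

print(estimated_size_gb(1.3e9))  # 2.6
```

The same arithmetic explains why a 7B model in fp16 needs roughly 14 GB on disk.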

Which also means that you need to run git lfs install before cloning the model with git clone.

Of course, this only applies to open source models. If you know any other way to find the size of an LLM, let me know in the comments. Thanks!


You can now run Machine Learning on EventStoreDB through MindsDB

I recently stumbled upon MindsDB on Product Hunt. It works by first connecting to a data source, creating a machine learning model, training it, and then running predictions. All of this is done through SQL-like queries.

So I decided to add one more database integration: my favourite, and the best, state transition database, EventStoreDB. At the moment my Pull Request is still in review (update: it’s merged!).

What this means is that you will need to checkout that branch and build it locally if you would like to use the EventStoreDB integration. Thankfully, this is not hard to do.

How to build MindsDB locally for development?

In a few steps:

  1. Make sure you are using Python 3.9
  2. Checkout the branch that you want locally
  3. Go to the folder create a virtual env (python -m venv .)
  4. Activate the environment, for example: on Linux: source <venv>/bin/activate or on Windows C:\> <venv>\Scripts\activate.bat
  5. python setup.py develop
  6. python -m mindsdb (A web UI should pop up)
  7. Or you can open it in PyCharm (configure it to use the same interpreter and venv). You should then be able to just run it.

How to use the EventStoreDB Integration?

Access the Web UI in your browser if it has not opened automatically.

MindsDB UI

Before creating a connection, you should make sure your EventStoreDB is running with EnableAtomPubOverHTTP=True and RunProjections=All, and that the $streams projection is enabled. This is required to allow MindsDB to get all the available tables, i.e. streams. Make sure that the $streams stream has at least one stream in it.

The integration treats EventStoreDB streams as tables and every key of a JSON event’s data as a column. Events with nested JSON are flattened, with underscore as the separator.
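To illustrate the flattening idea (a hypothetical sketch, not the integration’s actual code): nested keys are joined with underscores to form flat column names:

```python
def flatten(event: dict, prefix: str = "") -> dict:
    """Flatten nested JSON into column names joined with '_'."""
    columns = {}
    for key, value in event.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            columns.update(flatten(value, name))  # recurse into nested objects
        else:
            columns[name] = value
    return columns

print(flatten({"price": {"EUR": 1500}, "year": 2005}))
# {'price_EUR': 1500, 'year': 2005}
```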

Once you have done that, you can run a query to create a connection to your EventStoreDB. You could use the following for an insecure ESDB node:

CREATE DATABASE house_price
WITH ENGINE = "eventstoredb",
PARAMETERS = {
    "host": "localhost",
    "port": 2113,
    "tls": False
};

Or if you would like to connect to a secure node:

CREATE DATABASE house_price
WITH ENGINE = "eventstoredb",
PARAMETERS = {
    "host": "localhost",
    "port": 2113,
    "user": "admin",
    "password": "changeit",
    "tls": True
};

If this is successful you should then see a drop-down on the left displaying all the tables (streams).

How to run Machine Learning on your EventStoreDB data?

Now that we have access to EventStoreDB’s data, we can create a model and train it on that data. For the sake of this example we will use a stream named house_price_changes with events in the following data format:

"EUR": int,
"year": int

Add a few dummy events; if you are having trouble doing that, you can follow this article: Using F# and EventStoreDB’s gRPC client is easy.

Once you have at least 10 events, you can then create a model.

You should be able to see it in your MindsDB editor as follows:

You can get a quick statistical analysis on your data by clicking on the data insights button.

For example, this shows that we have a gap in our data: we are missing data for the year 2006.

Creating a simple ML model in MindsDB

The simplest model is to just do regression on the data and to predict the EUR field based on the year field.

CREATE MODEL mindsdb.home_price_model
FROM house_price
    (SELECT * FROM house_price_changes)
PREDICT EUR
USING engine = 'lightwood',
      tag = 'house price model';

This will cause MindsDB to start training the model. This can take a while, so if you would like to check the status of the training you can run a SELECT on the models table, filtered by the model name. For example:

FROM mindsdb.models
WHERE name='home_price_model';

Or you can click on the drop down on the left showing all models.

Making prediction with MindsDB

Once training is successful, you can start making predictions. You do that by making a SELECT query against the model you just created. For example:

SELECT EUR
FROM mindsdb.home_price_model
WHERE year=2023;

Here we are using the data that we have available (years 2000 to 2010) to predict the price of a house in the year 2023. The query returns the predicted value in the EUR column.
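Under the hood this kind of prediction is just regression; here is a minimal plain-Python sketch of the same idea (hypothetical EUR values, ordinary least squares on year vs. price):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a * x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return a, mean_y - a * mean_x

years = list(range(2000, 2011))
prices = [1000 + 50 * (y - 2000) for y in years]  # hypothetical EUR values
a, b = fit_line(years, prices)
print(round(a * 2023 + b))  # extrapolated EUR for 2023
```

A real model does more than extrapolate a line, but the principle of learning a mapping from year to price and querying it for an unseen year is the same.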

This is the simplest version of Machine Learning. There are more interesting (and complicated) things that you can run and predict, which you can explore in the MindsDB documentation.

Have fun! 😀


Visualize your EventStoreDB data with Polyglot Notebooks

In this post we look into:

  1. Connecting to ESDB
  2. Writing events
  3. Reading the events
  4. Deserializing the events
  5. Sharing the events with Javascript code
  6. Visualizing the events with d3js.

You can check the code at:

How to create a Polyglot Notebook project in VS Code

Step 1: Install Visual Studio Code

Step 2: Install the extension and .NET 7 SDK

Step 3: From Visual Studio Code: click Help -> Show All Commands -> Polyglot Notebook: Create default notebook, as shown below:

Then choose .ipynb and then F#.

You are now ready to write interactive code in multiple languages, as we will see further on. By the way, Visual Studio Code has likely installed .NET Interactive in the background.

How to specify different languages in Polyglot Notebooks

You can either write #!fsharp at the start of the cell or click the language selector at the bottom right, as shown in the screenshot. Upon clicking it you will find all the notebook kernels available to run your cells.

For example:

How do you install NuGet Libraries in Polyglot Notebooks?

Next we need to import our EventStore gRPC client libraries like we did in Using F# and EventStoreDB’s gRPC client is easy.

To do that we need to add a special #r directive followed by the details of our NuGet package. So for us, the following:

#r "nuget: EventStore.Client.Grpc, 23.0.0"
#r "nuget: EventStore.Client.Grpc.Streams, 23.0.0"

If you are not sure what to write, the NuGet package’s web page will help you: click on the Script & Interactive tab.

If you then click the Play button of the cell, you will see VS Code try to install the packages:

Once this is done, click the +Code button to create a new cell.

Write Events To ESDB with FSharp

Let’s start by writing events. We already covered that in Using F# and EventStoreDB’s gRPC client is easy. We need to create a list of events and send them to ESDB. We can do that as follows:

open System
open System.Text
open System.Collections.Generic
open EventStore.Client

let client = EventStoreClientSettings.Create "esdb://"
             |> EventStoreClient

let streamName = "house_price_changes"
let eventsList = List.init 10 (fun index ->
        EventData(Uuid.NewUuid(), "house_price_changed", //event type name is arbitrary
                  ReadOnlyMemory<byte>(Encoding.UTF8.GetBytes("{\"USD\":"+ string(Random().Next(1000,2000))+", \"year\":"+string(2000+index)+"}"))))
client.AppendToStreamAsync(streamName, StreamState.Any, eventsList).Wait() //StreamState.Any works whether or not the stream exists

We are creating JSON events; you can use System.Text.Json to serialize types if you want. Later we will cover how to deserialize your JSON events in F#.

Adding D3.js code

Create a new cell, this time setting the kernel to JavaScript or adding #!javascript at the start. In this code we will create a bar plot. You can do more complex visualizations, but the underlying principles are the same.

configuredRequire = (require.config({
    paths: {
        d3: 'https://cdn.jsdelivr.net/npm/d3@7/dist/d3.min'
    }
}) || require);

plot = function (data) {
    configuredRequire(['d3'], d3 => {
        // Call d3 here.
        const margin = {top: 30, right: 30, bottom: 70, left: 60},
        width = 512 - margin.left - margin.right,
        height = 512 - margin.top - margin.bottom;

        // append the svg object to the div that holds our graph
        const svg = d3.select("#my_dataviz")
            .append("svg")
            .attr("width", width + margin.left + margin.right)
            .attr("height", height + margin.top + margin.bottom)
            .append("g")
            .attr("transform", `translate(${margin.left},${margin.top})`);

        // X axis
        const x = d3.scaleBand()
            .range([ 0, width ])
            .domain(data.map(function(d) { return d.year; }).reverse()) //because I was reading backwards
            .padding(0.2);
        svg.append("g")
            .attr("transform", `translate(0, ${height})`)
            .call(d3.axisBottom(x))
            .selectAll("text")
            .attr("transform", "translate(-10,0)rotate(-45)")
            .style("text-anchor", "end");

        // Add Y axis
        const y = d3.scaleLinear()
            .domain([0, 2000])
            .range([ height, 0]);
        svg.append("g")
            .call(d3.axisLeft(y));

        // Bars
        svg.selectAll("rect")
            .data(data)
            .join("rect")
            .attr("x", function(d) { return x(d.year); })
            .attr("y", function(d) { return y(d.USD); })
            .attr("width", x.bandwidth())
            .attr("height", function(d) { return height - y(d.USD); })
            .attr("fill", "#69b3a2");
    });
};
Here we have imported d3.js from an external source; we are using the year as the x-axis and USD as the y-axis. We have mapped the values in our data (our domains) to positions on the x-axis and heights of rectangles (our ranges). D3.js is mainly about mapping data through domains and ranges to shapes in SVG.

Note: If you want to add transitions, you can check the d3 examples online. At the time of writing “true” streaming is not supported; you have to use JS intervals and call d3 to update your graphs if you want them to change live with data coming from ESDB.

Reading our data and Deserializing JSON to a FSharp type

open System.Text
open System.Collections.Generic
open FSharp.Control
open System.Text.Json //to deserialize JSON event data to house_price_change type
open EventStore.Client

type house_price_change = { USD : int ; year : int }
let eventToPair (resolvedEvent: ResolvedEvent) = //convert resolved event json data to house_price_change
    let JSONEventString = Encoding.UTF8.GetString(resolvedEvent.OriginalEvent.Data.ToArray())
    JsonSerializer.Deserialize<house_price_change> JSONEventString

let priceChanges  = List<house_price_change>() //we will append data that is read from ESDB and share this list with Javascript

client.ReadStreamAsync(Direction.Backwards, streamName, StreamPosition.End)
    |> TaskSeq.iter (fun event -> (priceChanges.Add (eventToPair <| event)))
    |> Async.AwaitTask
    |> Async.RunSynchronously

Here we create a type house_price_change = { USD : int ; year : int } and use System.Text.Json to deserialize our event’s JSON data to that type. This happens in our handler for the events read by ReadStreamAsync, where we call our eventToPair function. In retrospect it would have been better named eventToHousePriceChange 😅

We could have also used the serialize function to do the reverse when we were appending events.

Anyway, everyone is an expert in retrospect!

Sharing our variable to Javascript and plotting our data

<div id="my_dataviz"></div>

#!javascript
#!share --from fsharp priceChanges
console.log(priceChanges); //we receive the variable in a nice format

Finally, we create a cell with, first, HTML code to create a div which will hold our graph. Then we tell the cell to start handling JavaScript code with #!javascript.

More interestingly, we tell the cell to take our priceChanges variable, which is a list holding a specific type. This works surprisingly well, as you will see from the console output:


Then we pass that variable to our plot function defined before, which will plot our price changes. This should generate a graph like the one below:

That’s it, you can continue to build on top of this. If you add other kernels like Python, you could even add machine learning to your notebooks (I have not tried that).

If you are confused about any part of this post please feel free to comment!