Mind Circuitry

musings, realisations and contemplations.

Now that the dust is settling on the CrowdStrike outage, it's worth looking at some of the technical things we should be doing to prevent this in the future. Naturally, there is the question of why the QA process never picked up the issue in the content update, but that is not the focus of this article – here we are looking at what we can implement to mitigate any similar future issues.

Firstly, it's worth saying a big thanks to all the technical folk out there who worked to get the systems back up and working. They probably didn't choose the software, but they are responsible for ensuring it is up and working. These are the folks who rarely hear a "thank you" when everything is working perfectly! As someone who works in IT and does work behind the scenes ensuring systems are "up" and "performing as intended", I understand and appreciate the work being done to restore computers affected by this incident.

Thank you! You do an amazing job! 👍

Resilience

Before we look at some ideas to prevent\mitigate these types of issues in the future, let's just remind ourselves of how resilience is deployed in modern computing.

Network

When thinking about mission-critical systems, we are always taught to ensure there is resilience across multiple layers. The most obvious and basic is multiple internet links from separate carriers. This requires some magical routing to work, and is now generally well established. We access sites and services across the world, circuits go up and down, and still the traffic flows. OK, it may be a little slower, but it gets there and we don't notice.

Virtualisation

The next layer of resilience is that of computing power. We run our services on multiple servers that are distributed across multiple physical hosts, across multiple racks. If a rack of physical servers goes down, or needs maintenance, then our service continues working. This is now also a very well-established design pattern in computing architecture.

Application

Within the virtualisation layer, further resilience is applied – our application runs on multiple operating systems, spread over multiple physical hosts, in multiple racks. Now, if our application crashes on one virtual server, another instance picks up the slack. Yet again, this is now a very common design with technologies such as containerisation and application load balancers.

Cloud

So we have all our wonderful network routing, and our application is load balanced across multiple virtualised servers. Everything is wonderful, but it's all hosted in my office. The next part of resiliency is that of "The Cloud". The cloud involves using all the above resilience techniques, but in a highly resilient and managed set of data centres across the globe. Now we can have our application hosted on multiple virtual machines, across multiple racks, in multiple data centres, all with multiple network connections.

This is it now isn't it? What more do I need?

Multi-Cloud

Ok, so what if my region in my cloud provider goes down? That's OK, I have my application in two regions. But what if my cloud provider suffers a major outage in their networking? Yes, it's rare, but it does happen. On exactly the same day as the CrowdStrike outage, Microsoft had an outage in their Central US region (ID: MO821132), which impacted the majority of their M365 services. Hang on a moment – surely Microsoft have their services geo-distributed? Yes, they do, but there will no doubt be some backend services that are only hosted in a specific region, and this incident highlighted that.

The three major cloud providers don't make this particularly easy, although there are signs it is getting easier. For example, Microsoft Defender for Cloud can monitor AWS and Google Cloud workloads for security issues. Azure Traffic Manager (a DNS load balancer) can route traffic between endpoints hosted anywhere on the public internet.
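As a rough illustration of that last point, here's how a Traffic Manager profile fronting endpoints in two different clouds might look in PowerShell (Az.TrafficManager module; all names and FQDNs below are placeholders of mine, not from a real deployment):

# Priority routing: traffic goes to the highest-priority healthy endpoint
New-AzTrafficManagerProfile -Name 'tm-myapp' -ResourceGroupName 'rg-dns' `
    -TrafficRoutingMethod Priority -RelativeDnsName 'myapp-multicloud' -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath '/health'

# Primary endpoint hosted in Azure, secondary hosted in AWS – both are just
# 'external' endpoints as far as Traffic Manager is concerned
New-AzTrafficManagerEndpoint -Name 'azure-primary' -ProfileName 'tm-myapp' -ResourceGroupName 'rg-dns' `
    -Type ExternalEndpoints -Target 'myapp-azure.example.com' -EndpointStatus Enabled -Priority 1
New-AzTrafficManagerEndpoint -Name 'aws-secondary' -ProfileName 'tm-myapp' -ResourceGroupName 'rg-dns' `
    -Type ExternalEndpoints -Target 'myapp-aws.example.com' -EndpointStatus Enabled -Priority 2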

(I'd like to provide examples of where AWS and Google allow multi-cloud connectivity, but my day job is mostly focused on Azure, thus this is my area of expertise!)

Of course, it is possible to join networks together with Virtual Private Networks, mesh networking, or leased lines (such as AWS Direct Connect \ Microsoft ExpressRoute), but these can be costly. The use of multi-cloud wouldn't have helped any specific company with this outage; however, you can see that it's another aspect of resiliency that is getting more popular.

Improvements

So what improvements can we make beyond the above to help weather the storm for future issues similar to this?

1. Critical Vendors

What about vendors? Why do we insist on having just Acme Corporation provide all our endpoint security services? As we've seen with CrowdStrike, one dodgy update that slipped through QA can have a dramatic effect. How about having two vendors for your endpoint security services? Everything else in our tech stack above has been made resilient, so why not security services? In the example of our application being hosted on four virtual machines, two VMs would run one vendor's security software and the other two would run the other's.

One issue that may be preventing this is the management platform. For this to work, IT departments would require a separate management platform for each of the endpoint security services they run. There is a gap in the market here – for an open source platform that can provide basic management of such software, covering elements such as:

  • are my hosts up?
  • when were they last updated?
  • push out the latest update

Of course, this would probably need some co-operation between vendors, but it's a nice thought that one day we could do this? 🤔

2. Delays

Back in the days of Windows NT, I recall we would never apply a service pack as soon as it was released. This trend still continues in some areas – for example, never installing the latest OS in production as soon as it's released. At a number of organisations I've worked at, I've recently been introducing a delay to patching in production.

For example: following Patch Tuesday, all Development servers get patched on the next Monday. One week later, all Test servers get patched, and the week after that, Production servers get patched.
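To make that cadence concrete, here's a small PowerShell sketch that works out the dates for a given month – my own helper for illustration, not anything official; it simply finds the second Tuesday and the Mondays that follow:

function Get-PatchSchedule {
    param([datetime]$Month = (Get-Date))

    # Patch Tuesday is the second Tuesday of the month
    $day = Get-Date -Year $Month.Year -Month $Month.Month -Day 1
    while ($day.DayOfWeek -ne 'Tuesday') { $day = $day.AddDays(1) }
    $patchTuesday = $day.AddDays(7)

    [pscustomobject]@{
        PatchTuesday = $patchTuesday.ToString('yyyy-MM-dd')
        Development  = $patchTuesday.AddDays(6).ToString('yyyy-MM-dd')   # the following Monday
        Test         = $patchTuesday.AddDays(13).ToString('yyyy-MM-dd')  # one week later
        Production   = $patchTuesday.AddDays(20).ToString('yyyy-MM-dd')  # the week after that
    }
}

Get-PatchSchedule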

Another routine I introduced at another company: after Patch Tuesday, we would auto-deploy to internal servers on the Wednesday, and then to production servers on the Friday and Saturday.

Of course, these may not suit every business, but you get the idea – even a short delay of a day can help, and give the technical teams a critical-thinking break should anything untoward happen.

For endpoint security which relies on signature\definition\content updates (of some type), it's a little more difficult – we, the technical users, assume these are small updates that can be applied without a second thought. But yet again, could these updates be delayed by just one day? Do they really need to be installed the minute they are released?

3. Pragmatic Deployments

This of course is very dependent on the environment, but does the endpoint security software have to be installed on every server in the environment? For example, in this incident the media carried many photos showing the BSOD (Blue Screen of Death) error. I'm making assumptions here, but are some of these servers just displaying information for the general public? Could other security prevention techniques be deployed instead, such as network isolation, server hardening (e.g. disabling ALL unrequired services) or robust user access control?

Final Thoughts

There is a lot of finger pointing at Microsoft, but in my eyes this is wrong – the Microsoft shared responsibility model outlines that the customer is responsible for any services\applications deployed on their virtual machines (IaaS).

There is also further criticism saying that we should all move to Linux or Mac (though it went largely unnoticed that CrowdStrike had a very similar issue with Debian Linux in April 2024). We all have a choice to make on what operating system or application we use; however, we should always assume that there will be issues in the future and plan appropriately to mitigate them.

No technology or process is perfect.

By looking holistically and pragmatically at the systems and their dependencies that we support, we can hopefully reduce the impact of future outages similar to the one we have all just suffered.

Company Meetings

For the past couple of years, the company I work at has held an annual conference with presentations, team building events and an opportunity to socialise and network with the rest of the company. Every company has these types of events – you know the score – the CxOs outlining their strategies or new initiatives for the following year, with some drinks throughout to make it more bearable, and some awards to recognise and promote "good work". It's all stock stuff.

Now, some people love these events – from the initial "getting dressed up", to the opportunity to pout the lips in every selfie with various members of the different (usually customer-facing) teams. These are the staff who, even if there is a table plan, somehow all manage to sit together in their cliquey groups, looking down on the rest of the company. Then, at the opposite end of the spectrum, there is the standard techie who doesn't have the confidence or upfront attitude of the sales and commercial teams. In general, they would rather just get on with their work elsewhere with their headphones on, doing what they are paid to do. Socialising with the commercial teams and other "loud" teams does not interest them – in fact, it repulses them. On the table plan, HR try to mix people up so that they get to know others, which usually involves putting a completely introverted techie amongst a few commercial extroverts. Both sides hate this idea – the extroverts say, "X is so quiet", while the introvert techies quietly fiddle with their phones, counting down the minutes until home time.

In case it's not clear – I'm most definitely in the introvert camp.

“Please reserve this date”

This year our company is 10 years old, and therefore the company conference\meeting was due to be bigger and better than ever. Two years ago the company was quite small, and these conferences could be held in the local area; however, due to the company's growth and the requirements for this conference, they had to look further afield – in a different county entirely.

Invites were sent out months ahead to ensure that everyone could attend, and the meeting invite even said mandatory (although there was some debate over how mandatory it was). Argh! As one of those introvert techies I highlighted above, the dread was already setting in as I contemplated how painful this would\could be, amplified by the fact it was over an hour's drive away with everyone in the company, which was now considerably larger than at the previous event. The company was also offering free hotel accommodation for that evening, so that everyone could have a good time and not worry about getting home the same night. Very thoughtful and generous; however, in the style of "Dragon's Den" – I'm out. I like to go to bed at approx. 10.30pm and sleep in my own bed at home wherever possible, ideally being woken up at 5am by a cat sitting on my head wanting breakfast.

I'm in middle management, and I'm in a technical role (Cloud Engineering), so it was clearly important that I attend. Fine – I'll put my dread to one side – it is just attending – how hard can that be?

Approximately two weeks before the event, the Operations Manager came to me asking for help with their project. They wanted to know what my team is doing to support the running of the business. We work closely together, as my team occasionally provides some 3rd line technical support for the operations team, so I was happy to help. I explained some of the initiatives we are working on and how they align with the company goals and objectives.

I have no idea what happened at this point, but I stupidly (at the time) said, "if you want, I'll talk about my department at the conference?" Why oh why did I say that? I've reflected on this over the past couple of weeks, and I think subconsciously I wanted to improve my profile within the company; however, I was now dreading the event even more! In the two weeks before the event, I created a 5-minute presentation on some of those departmental initiatives and did everything I could to make it engaging, using analogies and metaphors. The instruction for the presentation from senior management was to "make it short and snappy". I guess this instruction was given because Operations and Cloud Engineering is thought to be a somewhat dull subject that the commercial teams won't be interested in? 🤔

Conference Day

I use a Garmin watch mostly for my running and cycling activity tracking, but it also has a useful feature called Stress Tracking. Garmin state that this uses a combination of Heart Rate and Heart Rate Variability to understand your body's natural response to the challenges of life and environment.

Here is a snapshot of my stress on a day where I am in the office:

Good day of stress

You can see some exercise in the morning – the cycle ride to the office – and then some high stress. This appears to be common after exercise; however, what's key is that for the whole morning I was just sitting at my desk doing my work. Lunchtime came, I had a quick walk, and then a couple of meetings in the afternoon. I've noticed stress is quite high in meetings when I'm participating. However, you can see it reduces as the afternoon progresses, before my cycle ride back home.

In stark contrast, here is the day of the conference:

Conference Day

Let's go through some key times of the day:

8.57am: You can see a very faint blue bar (stress was at 25), which is when my colleague and I stopped for coffee so we wouldn't arrive at the location too early. Interestingly, and somewhat amusingly, we bumped into two other people from the Development Team who said, "we thought we'd get a coffee so we can reduce the need for the awkward small talk", or words to that effect!

9.42am: The Operations Manager found me and told me about the stage that had been set up in the conference room. Stress was at 72.

10am: The start of the actual conference. At this time, I was just sitting down listening to the first presentation. There was no reason that I could see for my stress level to be so high, apart from pure anxiety. An agenda was on the screen – my slot was after lunch (approx. 1.30pm). Stress was now at 78.

Here is stress zoomed in from 8am to 1pm:

8am-1pm Conference Day Stress

As I sat listening to the first set of presentations, you can see my stress climbing, and I started to panic more and more. All the speakers were using a microphone, and they knew all their words and their presentations!

12:42pm: This was just during lunch, and the stress level was at 90!

1pm: A very slow walk around the grounds and a phone call back to my wife to try and calm me down.

Here is 1pm until 7pm:

1pm to 7pm Conference Day Stress

1.21pm: Sitting back down in my chair. Stress was now 97. At this point, it’s also worth noting that drinking alcohol can increase heart rate, and thus elevate stress readings. I did not have any alcohol on this day until 4pm.

1:33pm: My actual presentation, which I'll go into in the section below. Stress was at 96, and reduced to 74 shortly after I had finished. I cannot explain why it shot back up shortly afterwards though! I think the adrenaline was still pumping.

3pm: Break: Stress at 65

7pm: Home: Stress at 60.

Note: You will notice some high stress in the evening until midnight. This is because I had a few strong Belgian beers. That reading is not "stress" as such!

The Presentation

As I mentioned, all the other presenters were using a microphone. I've never used a microphone before, and certainly not to deliver a speech to over 150 people in a single room! This single element of using a microphone was probably a key part of my anxiety and stress. Let me explain...

When I prepare notes for talking or interview assessments, I write the full question or statement down – I don't summarise it with bullet points. When preparing my presentation, I put all my notes in a markdown file on my phone. At the time, I thought I was being clever so I could hold my phone at the conference and read from it. However, it was not meant to be...

I had to hold the microphone and talk into it, but not only that – I was given a clicker to change the slides. So now I have a microphone in one hand and the clicker in the other – where do my notes go? You may well ask (and rightly so) why I did not put my notes in the PowerPoint presentation and read them from the laptop? Well yes, I could have; however, I don't like spoiling the surprise for those who may see my presentation before it's delivered, such as my line manager. I like the surprise, especially as my slide deck didn't contain many words – it was mainly AI-generated images relating to my subject matter.

I got up to the stage, rested my phone on the laptop (after switching off the screen timeout!), and nervously held the microphone to my chest. The other speakers in the morning had all had audio issues, and I was told this was because they were accidentally pressing the mute button on the mic. So, I held it at the bottom, away from the mute button, and off I went. Shaking.

I have no idea if the microphone was working at the distance I was holding it – nobody said anything (which they did to others) – but I kept it still in the hope that it was picking up my voice so that those at the back could hear me. Throughout the presentation my hands were shaking. I tried to read my notes from my phone, but I got lost in them – there was too much detail written down – I really should have done bullet points! When talking about an important element that I was keen for the company to hear, I missed out various information. I finished the presentation with a little "blowing of my own trumpet" – which I felt uncomfortable doing; however, all those extroverts in the commercial teams? They probably do this all the time! Meh... it's my time and I'm going to big myself up! 💪

Afterwards

At the next break a few people came and talked to me, congratulating me on my presentation. This was really nice and did surprise me – I think they knew that it wasn't my "bag" or favourite thing to do. A close colleague that I've worked with for many years congratulated me, and when I told him how nervous I was and how much I was shaking up on that stage, he said he couldn't tell, and that I came across very well.

Another colleague I've known for many years admitted, in her own words, before the conference that all the presentations would be dull and dry – including mine – and told me not to take offence at it. (She has ADHD and openly admits that she struggles with focusing during these events.) However, she said my presentation was the best so far, thanks to my tone of voice and how I compared my subject matter to something the audience could understand. I was well chuffed with this! However, as you can see in the stress charts, I was still wound up!

Bonus

After the last presentations had finished, the final "official" segment of the day was the Company Awards. These are awards for people who have made an impact, or for long service. As part of the awards, there is always the "People's Choice" award, voted for purely by everyone in the company during the conference. The management explained that rather than having everyone vote for someone from all 150+ people, they had shortened the list down to approximately 20 people who they felt had made a significant effort\impact\awareness\something to the company, and we were to vote from this list. Well, you can probably guess where I'm going with this... my name was on the list, much to my surprise, and I was even more surprised when my name was read out! 😁

This was fantastic! I've never been voted for an award in my entire career! I don't know why I won, maybe it was that my presentation wasn't as dry as the others, maybe it was genuinely that I am doing a good job within the company – I have no idea! However, this is a bonus to the day, and allowed me to sit back with a strong sense of pride and satisfaction. Winning the “People's Choice” award rewarded me with a monetary voucher to reduce the cost of any holiday. Perfect timing ready for summer!

Reflection

The primary reason for writing this here was purely selfish – so I can remember the day – but the secondary reason is to show that being temporarily uncomfortable can actually grow your comfort zone.

The whole day was awkward for me, from the minute I got in the car to the moment I stepped back into my house at the end of the day. However, after a night's sleep I realised that by doing this – by promoting my profile within the company – I'm now more visible and more recognised. Yes, of course, I'll still be nervous doing public speaking to crowds larger than 10 people, however it will be slightly easier next time, and the time after that.

Practice may not make perfect, but it does make it more manageable and familiar. Perhaps next time I'll be more prepared with my notes and put them in PowerPoint, or write them as bullet points? I'll let you know... 😎

#reflection #worklife #stress

Following on from my recent therapy sessions, it was suggested that journalling could help offload any thoughts/concerns/emotions so they don't build up and cause problems or issues later on in life. This page is all about how I got started.

Getting Started

I've always liked the idea of having a little pocket book to jot stuff down in, when in meetings or out and about, but I've never actually followed through on it. Living in technology, it's always so easy to jot notes on your phone or laptop in whatever app you use. Whilst this has strong availability and ease associated with it, it just feels a bit too easy. Sometimes stepping off the well-trodden path can be challenging; however, by doing so, it can be more rewarding as more effort is required.

For my journalling, I settled on a Leuchtturm 1917 A6 dotted notebook classic, and a Fischer Spacepen. I was given the Spacepen at an IT trade show many moons ago, but never really used it – most probably for the reasons I mentioned above about digital note-taking! I figured the Spacepen would sit nicely in the pen loop on the side. Whilst I could have got a larger notebook, my requirement was to be able to take it with me as I go about my day, and write a little and often. The size of the notebook was probably the most important aspect, as I didn't want to feel overwhelmed or intimidated by a large A4 notebook and all its large blank pages!

My journalling notebook with Fischer Spacepen

I also treated myself to the pen loop on the side. A little crazy that this is an “add-on” for a notebook, but I figured if I'm going to do this, I want to do it right!

I am now ready to journal! For the first few days I just treated it like a diary, and wrote some thoughts on how the day went. Was it positive? Negative? How did I feel? What went well? What didn't? This was fine, however I felt it was just becoming a “Dear Diary” which didn't fully meet my needs. I wanted to journal to help process my thoughts of the day and to do some self-help therapy on myself.

Searching the internet, there are so many articles on journalling, and some have lots of “Getting Started” journal prompts.

For example:

What would I thank my past self for?

or

Write a list of 10 things you want to be able to remember during bad days.

These looked perfect! However, I don't want to search online every time I want a bit of help journalling... I needed a technical solution! My requirements were:

  • A simple web page with a button that presents a random Journaling Prompt
  • Prompts need to be unique
  • If not unique, then a counter should show how often each prompt has been used.
  • A history record of prompts used should be shown.

Prompter

In my job role I use PowerShell for a lot of automation and management of IT stuff, so it was only natural that I use it here. I have also been experimenting with PowerShell Universal at home, with a view to seeing how and if it can fit into my workplace. PU is an amazing set of tooling to complement PowerShell scripts. The key feature for me was the Dashboards\Apps feature – being able to run PowerShell in a web browser. I'm no developer at all, so this suited me well, and allowed me to tick off my first requirement!

First off, I started collecting lots of prompts that looked roughly useful (I didn't overthink it) and stored them in a CSV file, with some additional fields:

Prompts in CSV

I converted this file to JSON so I could easily update it programmatically, and also should I want to store more data against each prompt in the future.

Prompts in JSON
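For note, the conversion itself was a one-liner in PowerShell (a sketch – the file names are mine, and as Import-Csv reads everything as strings, numeric fields like Count may need tidying afterwards):

Import-Csv '.\Prompts.csv' | ConvertTo-Json -Depth 2 | Set-Content '.\Prompts.json'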

Great! Now I needed a proof-of-concept PowerShell script. This is generally how I program or script something – I just start by getting some visible results, and then build on it with error handling and all the other good stuff that I should do. This script however is just for me, nobody else, so to be quite honest, I don't care too much about its formatting or syntax!

$File = "D:\PromptHelper\Prompts.json"
$Record = "D:\PromptHelper\Prompts-Record.log"

# load the prompts from the JSON file
$PromptObj = Get-Content $File -Raw | ConvertFrom-Json

# Get a random prompt
$Prompt = $PromptObj | Get-Random

# format the history record
$OutRecord = "$(Get-Date -Format 'yyyy-MM-ddTHH:mm:ssZ'); $($Prompt.Category);$($Prompt.Prompt)"

# write the history record to the record file
$OutRecord | Out-File $Record -Append

# show me the prompt that was randomly chosen
$Prompt.Prompt
$Prompt.ID
$Prompt.Count

# increment the usage count and persist it back to the JSON file
# (IDs are 1-based, the array is 0-based)
$PromptObj[$Prompt.ID - 1].Count += 1
$PromptObj | ConvertTo-Json -Depth 2 | Set-Content $File

Now, time to move to PowerShell Universal and convert it into an App. I'm not going to go into the process of how I did that – it isn't important. That said... here is the script that I'm running:


New-UDPage -Url "/Home" -Name "Prompt Generator" -Content {

    $RootPath = "C:\Scripts\PromptGenerator"
    $file = Join-Path $RootPath "Prompts.json"
    $record = Join-Path $RootPath "PromptRecord.log"

    # load the prompts from the JSON file
    $PromptObj = Get-Content $File -Raw | ConvertFrom-Json

    # read-only text box that displays the chosen prompt
    New-UDTextbox -Id 'TodaysPrompt' -Multiline -FullWidth -RowsMax 10 -Disabled

    New-UDButton -Text 'Give me a Prompt' -Id 'TheButton' -OnClick {

        # honour the "unused prompts only" checkbox (a Count of 0 = never used)
        If ((Get-UDElement -Id 'ReusePrompts').checked){
            $Prompt = $PromptObj | Where-Object {$_.Count -eq 0} | Get-Random
        }else{
            $Prompt = $PromptObj | Get-Random
        }

        Set-UDElement -Id 'TodaysPrompt' -Properties @{
            Value =  "$($Prompt.Id) | $($Prompt.Category) | $($Prompt.Prompt) | $($Prompt.Count)"
        }

        # disable the button so I can't keep rolling for a "better" prompt
        Set-UDElement -Id 'TheButton' -Attributes @{disabled = $true}

        # increment the usage count and persist it back to the JSON file
        $PromptObj[$Prompt.ID - 1].Count += 1
        $PromptObj | ConvertTo-Json -Depth 2 | Set-Content -Path $File

        # append to the history log
        $OutRecord = "$(Get-Date -Format 'yyyy-MM-ddTHH:mm:ssZ'); $($Prompt.Category);$($Prompt.Prompt)"
        $OutRecord | Out-File $Record -Append

        # refresh the history table
        Sync-UDElement -Id 'Table'

    }
    New-UDCheckBox -Id 'ReusePrompts' -Label 'Only prompts not previously used' -Checked $true

    # history table, rebuilt whenever Sync-UDElement is called
    New-UDDynamic -Id 'Table' {
        $SortedRecord = Import-Csv -Header Date,Category,Prompt $record -Delimiter ";" | Sort-Object Date -Descending

        $Columns = @(
            New-UDTableColumn -Property Date -Title Date
            New-UDTableColumn -Property Category -Title Category
            New-UDTableColumn -Property Prompt -Title Prompt
        )
        $Page:Table = New-UDTable -Id 'history' -Data $SortedRecord -Columns $Columns -Title 'Prompt History' -DefaultSortDirection descending
        $Page:Table
    }

} -Generated

And this is how it looks when run:

Prompt Generator

Now I can access this whenever I'm at home, or remotely through my VPN on my phone, and it renders perfectly for any screen size. I press the button, and it provides a prompt in the top box, then disables the button to prevent generating more prompts. Obviously a reload of the page re-enables the button, but this simple logic is enough to stop me generating a new prompt just because I don't like the one I've been given. Also note the checkbox, which ensures prompts are not re-used.

Conclusion

It's been approximately one month since I started journalling – and I don't do it every day, but I do it regularly. When I'm stuck, I fire up my Prompter app and use that to get me started. As the generated prompts generally carry more reflective value than my usual day-to-day thoughts, I've also been jotting down in the front of my journal which prompt was used, with a page number pointing to my response. My hope is that I can quickly review this in the months to come.

My prompts for reflection

Overall, I'm enjoying journalling – the act of writing stuff down does strangely help me process life, and definitely helps me offload and wind down in those stressful moments!

Summary

I'm a bit behind on this section – it's nearly one month since the third session, due to other things happening in my life, most notably taking a holiday after my marathon! Following on from session 2, the game plan moving forwards was to mix things up: break the routine, and if need be, create a new routine for the next stage of life. This was the homework until the next session.

Feedback

As best I could, I did manage to mix things up, which included taking my youngest son out bowling mid-week! Yes, on a school night! It was a good night out – it shocked him somewhat, as he too is getting stuck into routines, and he likes routine himself. This night helped us both, I think. I have also started writing a few posts for this blog – which I've actually enjoyed doing. It's been a great way to offload aspects of my brain, allowing me to unwind more.

Session Three

Within the third session, we explored this aspect of “mixing things up”, and how it doesn't need to be the big gesture – sometimes the smallest and simplest things can make the biggest impact.

We also explored the aspect of fairness. I try really hard to ensure that I am fair to all the children as they grow up. Yet again, this has come from my parents and my upbringing, and has been engrained into my way of life. However, it is not always that simple!

Let me explain:

  • My first child was born during my first marriage, and the same with my second child.
  • Both these children's first few years in life were "traditional" as such – with a father and mother both caring for them.
  • My youngest child was born, and shortly afterwards came the divorce and the split of the family – all he has ever known is a split family, spending his first 8 years or so travelling between dad's home and mum's home at weekends.

Therefore, Child #3 has had a very different start to life than Child #1! There is no requirement to treat them all identically, as they all grew up at different times. That said, there is a need to be fair in actions and care, but not in specifics – e.g. "I did X for Y, thus I must do exactly the same for Z". They are not all the same; they are all different and have very different needs. The love and support I give them, however, is the same!

In another part of the session, we discussed journalling and how it can help offload from my brain, giving me space to process life events. I've always been interested in this idea, but just never got around to it. Thinking about it, it makes sense: jot down how the day went, how I felt when specific events happened during the day, what emotions were present, and suchlike. Now is the opportunity – I'm not a writer (as in, with a pen), I'd much rather type – but this could improve many aspects! My handwriting for starters, but also having a little book of thoughts and brain dumps.

Note: At this point, it should be said that this journalling is not to replace offloading to my significant other – that will still and always happen; however, there are some days when, because of her work, she is asleep before I finish, and that can continue for a few days in a row. Journalling allows me to offload some things during these "ships passing in the night" events, allowing me to deal with the ups and downs of life with improved patience and understanding!

Further Realisation and Reflection

Whilst swimming as part of my cross-training for the marathon, I reflected on session #3 and what was discussed. I had that blinding-light moment and realised that this is actually my new routine! 💡 I'm already carving this routine into my life as I get older! There is no need to be in the house early to get the kids up and help them with their breakfast! For a start, there is only my youngest, and he is a teenager, so he doesn't need my help like a toddler would! So, this is it! A new routine being forged! I CAN go out swimming or running early doors!

Looking back at the holiday I had after the marathon, I had a great week, and I still mixed it up. There was of course the planned Spa day, but the other days were not really planned out at all. My wife and I spent the entire week together walking, having pub lunches, and seeing family. It was great, and looking back – it was completely freestyle, and it didn't break anything! The old routine is being replaced with a dynamic, fluid routine that adapts to life events!

#lifeevents #reflection

Following on from my Pre-Marathon Thoughts, here are my thoughts and outline of my first ever actual Marathon that I ran in early May 2024.

The Plan

I had a plan from the start. I work well with plans – it's what I do! My plan was to run at a 6:00/km pace and finish in approx. 4h 15m. Hal Higdon was my virtual\paper trainer, and he states that it is important not only to train for running the distance, but also to train for the food and nutritional aspects of the marathon. Thus, in my long training runs, I took an energy gel every 11km (before I got tired), and for the marathon I packed three energy gels plus one for emergencies. Additionally, so that I wasn't reliant on water stops, I took a 1.5-litre CamelBak of water with two electrolyte sports tablets in it.

Music playlist has been set up:

  1. DJ-Kicks: The Juan MacLean – This is my warm-up builder.
  2. 50 minutes of classic bangers from my training playlists. Including such tunes as “Bonnie Tyler – Holding out for a Hero”, “Beastie Boys – Body Rockin” and “Tori Amos – Professional Widow”. 😎
  3. Hans Zimmer – Live in Prague. This was deliberate to get into the final zone where it was toughest and run at my planned pace. This should take me to my desired time of 4h 15m, where I end the marathon running to “Time” from “Inception” ⌛
  4. Susanne Sundfør – Ten Love Songs – Over-run music. Similar to my training runs, this is what gets played if the album\playlist finishes before the run does.

The Night Before

Urgh – I had a terrible night's sleep! Hal Higdon is right – get a good night's sleep two nights before the marathon! He says that you'll be anxious the night before, so it's even more important to get that sleep and rest in beforehand. I did, however, have a great meal out in the early evening – mac and cheese, chicken and flatbread, washed down with a couple of 0% ABV lagers. The rest of the evening was just the usual fruit teas for hydration, but no caffeine.

The Morning Of

I woke up very anxious indeed. The poor sleep did not help at all. My breakfast was porridge with raisins\cranberries, a milkshake, a pain au chocolat and black coffee. On reflection, I probably should have had more; however, my anxiety that I'd be too "heavy" running prevented me from doing so. Heading to the event, the weather looked perfect – dry, slightly overcast, with low wind. The temperature was approx. 12°C.

Arriving at the event, there was the usual parking hell of any sportive. Although we had booked a parking space, it appeared this didn't matter, and we were told to park elsewhere in the car park! This really didn't help reduce my anxiety! We parked up with 40 minutes to go. I had a few text messages from friends supporting me, which was ace and felt good. My good friend called my wife and told her that he would meet me at mile 20 to support me over the last 6 miles, as that is the toughest part. I had very mixed feelings here – firstly, what a great thing to do: supportive, and the right thing to expect from a friend. On the other hand, I just wanted to get into the zone and run the distance quietly, in my own way, with my own music, as I had planned! However, I accepted the news, and was quietly pleased that I would get some company for the last part. After a quick toilet stop at the ever-so-crowded McDonald's, it was off to the pen relevant to my estimated finish time.

What a lot of people! The event also had a half-marathon running at the same time, so in my pen I was mixed in with half-marathoners and full marathoners. I got talking to a chap and an elderly couple who were doing the marathon and half-marathon respectively. The chap, a marathon veteran, said the best advice is to start slow and go slower. I get it – this is what I was thinking all the time in training – maintain the desired pace throughout. My 20-mile training runs were done solidly at 6:00/km, and to me this was a "slow" pace. In my running the year before, I was doing between 5:15/km and 5:45/km for runs up to 15km, so to me – I was already running "slower"!

The Marathon

For the first half of the marathon I was feeling great – I had set up a PacePro strategy on my Garmin watch to make sure I didn't go too fast; however, with the atmosphere and the fact that I was running with a load of half-marathoners, I did go slightly faster than I should have (see below: actual pace vs PacePro). Those running the Half were obviously running at a different pace to me – I really should have slowed down more! My wife and youngest son were there to cheer me on; they managed to get to two different places on the course, which was lovely to see, and I appreciated their support!

PacePro vs Actual Pace

The second half of the marathon was definitely tougher. There were fewer supporters (most had done their part for the half-marathoners), and the road and route were clearer, with bigger gaps between the runners. My friend did join me at mile 17 or 18, which I recall was fantastic at the time – so nice to have the company and support. I'm not sure when, but at some point after mile 22 my upper thighs started to burn so much. Every stride forward was agony. For the last four miles I ran and walked repeatedly – there was no way I could run continuously (even at a very slow pace) with this pain.

Walk vs Run

In the last two miles, I consumed my last emergency energy gel and pushed forward. With the finish in sight (under 1km), I continued to run as best I could, as my friend dropped off to meet me at the finish line. I crossed the line at 4hr 39min. Not my desired finish time, but nevertheless – a completed marathon without injury! Next stop... food!

Milton Keynes Marathon 2024

Lessons Learnt

Treating this like a business incident\exercise, there were several lessons learnt should I do this again – and there is a strong chance I will!

  1. Get a good night's sleep. Ideally don't stay in a hotel in a city centre with nightlife going on nearby, and if you do – spend some extra money to get one with working air conditioning (so you don't need to have the windows open, and thus get less noise\disruption).
  2. As the chap said at the start – start slow and run slower. If there is a half marathon happening at the same time, understand that those runners will be going a different pace to you.
  3. Post training runs, I feel I neglected my thighs as part of stretching and cooldowns. I had no issues with my calves, but then I did spend more time on those, making sure they were properly maintained.
  4. A longer training plan. Although I was running before I started the 16-week training plan, I feel having a longer plan may have prepared me better.
  5. Either do it completely solo, or with a friend the entire way. Although I massively appreciated my friend's gesture of running with me, going through the pain together for the entirety would have been preferable, as we would both experience it and support each other. Just not the middle ground!
  6. Change the music for the first two hours so it's more regular and calmer. My original plan was for Global Underground #24 – Nick Warren; however, I changed it at the last minute. Not sure whether this would have made a difference, but it's a change I would make should I do it again!

Reflection

Overall, although I didn't hit my target of completing it in under 4h 20min, I did meet my first aim of actually finishing it. Also, there were some things that I am glad I did – such as booking a Spa Day for the day afterwards! Hydrotherapy pools with underwater jets are amazing for sore leg muscles! Additionally, I managed to raise over £1500 for the British Heart Foundation.

Although I said to myself I won't do this again, if truth be told – I most probably will, and I may even do the same course this time next year. My friend who joined me said he would happily do this course as his first official marathon, so if he books in, I'll join him.

Note to future self – read this!

#marathon #lifeevents #reflection

Recently, I've invested in a GivEnergy All-In-One (AIO) battery and Gateway to be smarter with our energy management, with the aim of reducing our bills. This article documents the initial installation, with the main focus on the security configuration of the devices.

NOTE: I'm assuming an element of technical knowledge from the reader here; this is by no means a HOWTO guide.

Installation

As part of the commissioning, the installation engineers requested my wireless network details so the devices could communicate back to the GivEnergy Cloud, and once this was done – the only advice I received was to “change the password of the portal for the devices”. So, better go and do that first then!

First, I needed to find the IP addresses of the devices. Both the AIO and the Gateway connect to the wireless separately, so I was looking for two IP addresses. I run Unifi Access Points and Switches at home, so this was a breeze. Once found, I dropped the addresses into a browser and logged in with the default credentials (admin\admin).

GivEnergy logon credential configuration

When changing the password for the admin account on the devices, the first thing I noticed was the clear-text password field! 😱 Argh! Oh well, it is what it is... better get it updated first.

Let's have a look at the settings and menu choices we have:

Mode Selection

This appears to allow us to change between AP and STA mode. Noted. Looks good so far – it's on STA mode.

GivEnergy Working Mode Configuration

AP Interface Setting

This enables the Access Point of the device for configuration. Hmm, looks concerning, but the previous setting suggested this wasn't enabled? Maybe it's not so bad?

GivEnergy AP Interface Setting

STA Interface Setting

This allows us to configure the device onto the home wireless network. Sigh, more clear-text password fields, but OK – let's move on.

GivEnergy STA Interface Setting

I'm not a security professional, so there may be more issues present, but these concerns jumped out at me:

  1. All password and SSID passphrase text boxes were clear text.
  2. AP Interface Security Mode is Open by default!
  3. If I was being really picky, it's HTTP only, and no HTTPS.
  4. I noticed TELNET (not SSH) was open. (I'm going to dig into this in the future)

Digging deeper into these as part of improving the security stance, I discovered the following:

  • The device password field is 20 characters maximum length
  • The SSID passphrase fields are 63 characters maximum length
  • The 'Hidden' tickbox on the AP Interface appears to make the AIO\Gateway unavailable in the GivEnergy Portal
  • If changing the AP SSID name, it also appears to make the AIO\Gateway unavailable in the GivEnergy Portal
  • The Mode Selection between AP and STA does NOT disable the AP SSID! The AP SSID was being broadcast, no matter which option was set!

In terms of basic security, this is sub-optimal. In summary, if I did not do any configuration (as a basic consumer), a bad actor could connect wirelessly to the AP mode of the device and browse my network for other devices to exploit\pivot\etc, or use my bandwidth for free. I repeat: SUB-OPTIMAL.

Improving Security

Firstly, I configured the GivEnergy devices as best I could, taking into account all of the above, which involved one solitary but important step:

  • Securing the AP interface with WPA2 and a strong password. This did NOT break communication back to the GivEnergy Cloud, which was nice 😉.

My house runs Unifi for Access Points, Switches and Cameras, and I love it! As part of this configuration, there are a number of key VLANs configured:

  • Management (Wired devices and native VLAN)
  • Wireless (Single SSID associated with it)
  • Security (Cameras, NVR, etc.)
  • DMZ (NextCloud instance)

Although these devices were only on my Wireless VLAN, I was still very uncomfortable with this, as that VLAN is used heavily by everyone in the house and has a lot of devices connected to it. These are the steps I took to improve security:

Within Unifi:

  1. Create a separate Wireless Network called "Energy"
  2. Create a separate VLAN and publish the SSID only to that VLAN.
  3. Create a WiFi Speed Limit profile and attach it to the new "Energy" network.
  4. Enable MAC Address filtering on the "Energy" network for only the GivEnergy devices

At the firewall:

  1. Ensure only HTTP, HTTPS, NTP, DNS, and TCP/7654 can access the outbound network.
  2. Ensure no traversing of VLANs is possible from "Energy", but allow "Management" access to "Energy" for configuration of the device portals.

Note: There is probably more I can do here at the firewall level, but I'm leaving that for another article. I'd like to understand what traffic goes out, and to what IP addresses, and lock it down to just those ranges if possible.

Pitfalls

When configuring these improvements, there were a couple of issues that tripped me up! When configuring the wireless settings on the Gateway, I accidentally changed the SSID or password in the STA mode setting (see point #1 below), and ended up locking myself out of the administration portal for that device. I learnt some things here:

  1. Don't drink beer and watch TV at the same time of making critical configuration changes 😉
  2. If you want to use LAN instead of WiFi, you can, but you need to change the DIP switches on the side of the Gateway. I never got this working – it could have been related to point #1, or just me being impatient.
  3. I'm not sure what triggered this; however, after some time trying to get access through the LAN, the wireless module on the Gateway reset itself. So, I reconnected via the AP mode with OPEN security (and a reset admin password) and re-configured everything again. I'm not sure if this was caused by changing the DIP switches to enable LAN, or by the WiFi module failing to connect to the provided SSID.

The manual outlines the DIP switch settings if you need to understand them, and it appears you cannot have both. Makes me think – if in LAN mode, will that disable the AP Mode Setting? All of point #3 is one to understand another day, and document appropriately.

The mobile app has two settings – "Home" and "Away". Home connects locally via IP, whereas "Away" connects via the GivEnergy Cloud. Now that the GivEnergy devices are isolated on their own VLAN, the mobile app does not find the devices when at "Home". I assume this is down to broadcast traffic and inter-VLAN traffic being blocked. It's not a major concern, as you can still see data when "Away", and the portal still works fine. Again, another one to investigate another day.

I'm not 100% happy with this configuration; however, it is fine for now, and I was keen to get the cost-saving elements configured so I can start saving money! I plan to revisit the security of these devices with a view to LAN configuration, disabling the AP mode completely, and further investigation into secure firewall configuration.

Energy Configuration

Now that my devices are configured and more secure than "factory default", it was time to turn my attention to maximising cost savings, configuring them to charge during the lowest tariff period of the day.

I decided to use Octopus Agile – a beta smart tariff in the UK that provides access to half-hourly energy prices, tied to wholesale prices and updated daily. The plan was to charge my battery at the cheapest point of the day, and draw from the battery at the most expensive time (usually between 4pm and 7pm).

Example Octopus Agile Tariffs

You can see the historical data of their tariffs here.
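For the curious, the Agile rates are also available from Octopus's public API, so finding the cheapest half-hour slot can be scripted. Here's a hedged PowerShell sketch – the product and tariff codes are illustrative only, as they vary by region and tariff version:

# Query the half-hourly unit rates for an example Agile tariff (region code 'C')
$product = 'AGILE-FLEX-22-11-25'
$uri = "https://api.octopus.energy/v1/products/$product/electricity-tariffs/E-1R-$product-C/standard-unit-rates/"
$rates = (Invoke-RestMethod -Uri $uri).results

# Find the cheapest half-hour slot in the returned window
$cheapest = $rates | Sort-Object value_inc_vat | Select-Object -First 1
"Cheapest slot: $($cheapest.valid_from) to $($cheapest.valid_to) at $($cheapest.value_inc_vat)p/kWh"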

As no information was provided to me by the installers, it was down to me to figure out how to configure the charging schedules. What I learnt here may help others reach the optimum process quicker!

Mobile App

First, in the obvious place to look, I discovered I could set a single charging period per 24 hours via the mobile app. Not a bad start; however, it's only a single charge period, and checking the historical data from the website above, I can see that the cheap periods change daily, and more noticeably at weekends.

Mobile - Setting Charge Period

GivEnergy Portal

Once you've found the right place to configure this, the portal allows for more charging periods; however, they are still static time periods. For note, the way to do this is to go to My Inverters > Remote Control on the All-In-One:

GivEnergy Portal - Remote Control

Once there, you configure the time period that you want it to start and end charging, along with a maximum percentage. It has a weird read-and-commit process on each setting. There are 10 time periods you can set, and you can also set discharge periods.

Setting GivEnergy Charging Periods

Still this isn't dynamic! What else?

HomeAssistant

I also run a HomeAssistant installation at home – perhaps this can do it with its automations? Yes, it can; however, it isn't by any means simple. Firstly, it requires installing the GivTCP add-on in HomeAssistant. That add-on then needs access to your GivEnergy devices, which, if you recall from above, are on a VLAN isolated from my other devices. So, some inter-VLAN firewall-rule-hackery took place, and it could then see the devices. The next hurdle was that there appears to be a bug around pulling data from them.

I gave up here, as after some more internet searching I found the following solution!

Octopus R&D Labs

Octopus Energy have an R&D Labs site that utilises the GivEnergy API to schedule charging at low tariff periods! Fan-bloody-tastic!

Configuration was actually quite simple – get the API details from GivEnergy Portal, create a Device Group and add in your GivEnergy Devices. Also, put in your Octopus Agile API details (found within My Account in your Octopus Portal).

The R&D site also includes a number of guides on how to configure charging based on lowest tariffs:

Octopus R&D Labs Guides

Following the guide was simple, and so far after a few days I can say that this appears to be working as intended.

As with any pre-release\beta software, we should always be mindful that features can change, but so far this looks excellent. A key point to remember (and it does tell you this during configuration) is that this will override any configuration set in GivEnergy, although the time periods may still appear set.

Conclusion

From this, we've learnt a good deal:

  • The kit is highly insecure at factory default, and MUST be secured!
  • Some of this security configuration can be daunting for a non-technical person. Always seek advice from a trusted techie\expert to help 😉
  • I'm not affiliated with Octopus, however their R&D Labs made the scheduling so simple. This ideally needs to be better advertised\outlined when signing up to their Agile Tariff.

I'll investigate the networking aspects further in a separate post, as I am keen to access these devices via LAN cabling. From reading the GivEnergy community forum posts, it should be as simple as flicking a DIP switch... but that didn't work so well for me! That said, I need to run some cabling in the house before I can make this happen.

#GivEnergy #Security #Energy #Technical

Background

Over the past year, I have been supporting a client with a multi-tenant application focused around data analytics. This article outlines some of the hurdles that were faced around Azure SQL Server when applying Defense in Depth security principles.

Note: The architectural decisions around the multi-tenancy aspect of the application was not designed by me, nor is is not under my control. My remit was to secure the application without significant design changes.

Architecture

This is a simplified architecture diagram of the product, with only the relevant elements included.

Basic Architecture Diagram

Data is pulled from the customer environment and stored within Azure SQL Server in the provider's tenancy. A PowerBI report is published to the customer's PowerBI App Service, which reads and visualizes data from the Azure SQL Server.

Problem

As part of regular security screening using the CIS Microsoft Azure Foundations benchmark, it was identified that the Azure SQL Server had the following tick box enabled:

Azure SQL Server Exceptions

The documentation from Microsoft explains further:

Microsoft Learn Extract

When looking at this rule within PowerShell, it shows as 0.0.0.0/0!

Azure SQL Firewall Rule

Although the Microsoft Learn documentation says this covers ONLY Azure IP addresses, as a network guy this screams "everything" to me. Because we are applying defense-in-depth techniques (and thus cannot rely on authentication alone), and this database contains customer data (with potentially PII within it), this checkbox needs to be disabled. Easy! ... well... is it?
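For reference, this is roughly how the rule shows up (and how it can be removed) via the Az.Sql PowerShell module. Resource names below are placeholders, and to my knowledge the checkbox materialises as a special rule named AllowAllWindowsAzureIps with a range of 0.0.0.0 to 0.0.0.0:

# List the server-level firewall rules, including the 'Allow Azure services' one
Get-AzSqlServerFirewallRule -ResourceGroupName 'rg-data' -ServerName 'customerdatabase' |
    Select-Object FirewallRuleName, StartIpAddress, EndIpAddress

# Unticking the checkbox is equivalent to deleting that special rule
Remove-AzSqlServerFirewallRule -ResourceGroupName 'rg-data' -ServerName 'customerdatabase' `
    -FirewallRuleName 'AllowAllWindowsAzureIps'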

Options

Keeping in mind the requirement to remain a multi-tenant application, there were a number of options available:

  1. Programmatically get the PowerBI IP address ranges, and populate them into the ASQL firewall rules. Then disable "Allow Azure services and resources to access this server".

  2. Use a Virtual Network Gateway between the customer's tenancy and the service provider's tenancy.

  3. Install a Data Gateway in the customer's tenancy on a Virtual Machine with a static public IP address, and whitelist this IP address in the ASQL firewall rules.

  4. Migrate the Azure SQL Server (ASQL) to an Azure SQL Managed Instance (MI) or a Virtual Machine (VM), and apply a PowerBI Service Tag to the associated Network Security Group (NSG).

  5. Implement Azure Firewall, connecting to a Private Endpoint on ASQL. Include an NSG on the Virtual Network to only allow IPs from PowerBI using its Service Tag, and\or put the Service Tag within the Azure Firewall rule.

Let's run through these:

Option 1: Programmatically get PowerBI IP Address Ranges

This option involved getting all the PowerBI address ranges and programmatically putting them into the ASQL firewall. This felt like a good solution – although not perfect, we would be dramatically reducing the attack surface to just the PowerBI service.

Firstly, before we write a script, let's get the JSON file from here and check how many address ranges there are for PowerBI, to see if this is viable:

PowerBI IP Address Ranges
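A minimal sketch of that check, assuming the weekly "Azure IP Ranges and Service Tags – Public Cloud" JSON has already been downloaded (its URL changes with every release):

# Count how many address prefixes the PowerBI service tag contains
$tags = Get-Content '.\ServiceTags_Public.json' -Raw | ConvertFrom-Json
$powerBi = $tags.values | Where-Object { $_.name -eq 'PowerBI' }
$powerBi.properties.addressPrefixes.Count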

Well, this won't work, as there is a maximum of 256 rules for Server-level IP firewall rules.

Result: ❌ Rejected

Option 2: Use Virtual Network Gateway

This was a wildcard option – it would require work on the customer's side, and thus it was rejected quite early on, on the grounds that we should solve this and secure the database with as little effort in the customer's tenancy as possible.

This involved using the Virtual Network Gateway and creating the relevant rules in the ASQL firewall to allow access from the customer's tenancy.

This article outlines more about how to achieve this, however for our requirements, it would be slightly different to their first diagram.

Result: ❌ Rejected

Option 3: Data Gateway in Customer Tenancy

Although this would technically work, it's an additional resource requiring maintenance in the customer's tenancy. For this reason alone, in a similar vein to Option 2, it was not a viable option for this issue.

Result: ❌ Rejected

Option 2: Migrate to Azure SQL MI / VM

There was a feeling that this would work, however we were keen to move to PaaS services where possible. Running SQL Server on a VM would be a backwards step, as all the maintenance elements of running a VM would return. The Managed Instance option would reduce those maintenance aspects, however it came at quite a high cost. At this stage the option was rejected, though it was noted that it may be revisited later in the project.

Result: ❌ Rejected

Option 3: Implement Azure Firewall with Private Endpoints

This looked the most promising, and following discussions with Microsoft it was a strong contender, despite not being their recommended solution. A Proof of Concept was set up with the following design:

Azure Firewall PoC

This enabled us to access the database through a new hostname – customer.saasprovider.tld. When using SQL Authentication, it was important to pass the username including the server name:

sysadmin@customerdatabase.database.windows.net
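A hypothetical connectivity test for the PoC, using the SqlServer module's Invoke-Sqlcmd (the hostname and account are the examples above, and $plainPassword is illustrative), would look something like:

    # Connect via the firewall's public hostname, but authenticate with the
    # username carrying the *original* logical server name.
    # $plainPassword is a placeholder; don't hard-code credentials in real scripts.
    Invoke-Sqlcmd -ServerInstance 'customer.saasprovider.tld' `
        -Username 'sysadmin@customerdatabase.database.windows.net' `
        -Password $plainPassword -Query 'SELECT 1'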

To ensure only PowerBI endpoints could access the database, we added the Service Tag for PowerBI to a rule within the NSG.
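For illustration, adding such a rule in PowerShell might look like this sketch (the NSG and resource group names are assumptions):

    # Allow inbound SQL traffic only from the PowerBI service tag.
    $nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'rg-example' -Name 'nsg-sql'
    $nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow-PowerBI-SQL' `
        -Priority 100 -Direction Inbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix 'PowerBI' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange '1433' |
        Set-AzNetworkSecurityGroup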

It did not work! 😱 We had a suspicion that this was because the traffic was not originating from one of the IP addresses contained within the Service Tag. We were close... When checking the documentation, it stated: “Note: does not include frontend endpoints at the moment (e.g., app.powerbi.com).”. When we raised this with Microsoft, they did not confirm or deny it, however they did outline the main cause of the issue. As the traffic was traversing Azure Firewall, it was being NAT'ed, so the source IP address accessing the ASQL private endpoint was not that of PowerBI – it was the internal IP address of the Azure Firewall! 🤯

Microsoft's suggestion was to add all the PowerBI IP address ranges into an Azure IP Group, and bind that to the relevant rule in the Azure Firewall. We dutifully created an Azure Automation job to get the latest IP addresses from the JSON file in #1 and populate an IP Group object with them. This worked a treat, however it was noted that Azure Firewall and IP Groups (to date) do not support IPv6 addresses.
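The runbook logic boiled down to something like this sketch (the IP Group name, resource group, and the variable holding the weekly JSON URL are all assumptions):

    # Refresh the IP Group with the latest PowerBI prefixes.
    $tags   = Invoke-RestMethod -Uri $serviceTagsJsonUrl    # hypothetical variable; the weekly URL changes
    $ranges = ($tags.values | Where-Object { $_.name -eq 'PowerBI' }).properties.addressPrefixes |
              Where-Object { $_ -notmatch ':' }             # IP Groups don't support IPv6, so drop those
    $ipGroup = Get-AzIpGroup -ResourceGroupName 'rg-example' -Name 'ipg-powerbi'
    $ipGroup.IpAddresses = [System.Collections.Generic.List[string]]$ranges
    Set-AzIpGroup -IpGroup $ipGroup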

This finally looked like it was going to work! We started to plug our PowerBI test report into the PoC and began configuration. Authentication was failing during the publishing phase from PowerBI to the PowerBI App Service. It appeared that passing in username@customer.saasprovider.tld failed, as ASQL had no understanding of that username. ASQL was expecting to see username@customerdatabase.database.windows.net, however we couldn't pass that username through, as the ASQL server was not available on the public internet!

At this point, we also discovered that even if we did run SQL on an Azure SQL Managed Instance, we would have exactly the same problem around authentication, thus option 2 was rejected again (this time with confidence that it would not work, even if the budget was approved).

Result: ❌ Rejected

Next Steps

During the conversations with Microsoft, they suggested two more options:

Semantic Model Sharing initially appeared to be a viable option, however the lack of automation hooks for our DevOps deployment processes, along with the increased support administration, also made it unviable. This is because we would have to share the semantic model with specific users, who would then need to be created as Guest Users within our EntraID.

Conclusion

This investigation took approximately six months, and included multiple scripts, proofs of concept, and conversations with Solution Architects from Microsoft to help us meet the requirement of allowing access to Azure SQL Server in a multi-tenancy environment.

After discussions with senior management, the final conclusion was that the product should be re-architected so the report is published to the provider's tenancy, with a strong possibility of using PowerBI Embedded to surface the data via a web portal. This re-architecting is currently underway and being designed appropriately. In the meantime, we've learnt a huge amount about how to secure Azure SQL Server, which will stand us in good stead moving forwards as other services and products are migrated from SQL Server on virtual machines or on-premise. Hope you have learnt something too!

This is Part 2 of my Mid-life Changes posts.

Summary

As noted in Part 1, I've taken steps to improve life based on recent events. I had recently been feeling restless and constantly bored, despite my days being filled with work, a side hustle, and house chores. I've just had my first therapy session, and it's opened my eyes to what is going on!

I discovered that:

  • Up until now I have had a job – providing for my children – and two of them have now successfully moved out of home and are mostly providing for themselves.
  • Stemming from the custody battle, I have subconsciously believed that I must prove myself and my abilities.
  • Everything that is done is done for the children first, and me second.

Routine

Ever since I was a child, I've been subject to routine. Two examples from my childhood that I strongly remember:

  • Always having a walk\cycle in the park on Sunday afternoon
  • Sitting round the table with my siblings eating tea at 5pm

Then, when I became a father, further routine was suggested to me by my parents. Not in a negative sense, but always with the words “a routine is so important for babies” etc. For example:

  • Having a walk in the afternoon
  • Tea at 4pm-5pm
  • Then bath-time at 6-7pm
  • Then a story in bed, followed by lights out.

Why did we do this? All to teach the baby a good routine, so there are no surprises, which overall calms them and thus teaches them to sleep well at night. There are other examples, but this is the easiest and most common.

I'm still very fixed in routine to this day. For example:

  • For the past 6 years, I've nearly always gone to the local coffee shop at 2.45pm on a Thursday. 🤷‍♂️
  • Wanting to know what tea is (today\tomorrow\whenever) so I can plan my lunch and not repeat the same food group, such as bread\pasta.

Session Two

This became the focus of my next therapy session. Without intending to, we revisited the custody case and how it was a trauma event in my life.

The following was put to me:

  • If there is a routine and I do the same things repeatedly, then I believe I won't ever reach the trauma event and I'll be safe.
  • However, if I do stick to the routine, and something unexpected does happen, it will throw me off balance, potentially causing a bigger impact.

We discussed that whilst a routine is an excellent strategy, it doesn't work for me currently. I've been doing the same routine for years (caring for and parenting children), and now two out of three have left home, leaving a void in my life, with my routine thrown out now that such a large part of my life is no longer the same. It's changed.

The Plan

It is now more important than ever to mix things up. Break the routine. Practice being uncomfortable. The more I practice this, the easier it will be should something unexpected come around the corner.

There was something oddly strange about what I discovered in that second session.

The session took place on a Wednesday afternoon. As you may see from my other posts, I'm currently training for a marathon. My training is usually early morning, approximately 6-7am, and on Mondays I swim, as this is gentle cross-training the day after a long run on the Sunday.

On the Monday before therapy, I didn't go swimming between 6am and 7am; I thought I'd have a rest day instead. However, at 7pm that day I felt that I had missed it, and decided I'd go swimming at 8.30pm (adult swim lanes) instead. To be completely honest, I was quite anxious over this. Parking was more difficult, the usual crowd from 6.30am were not there, and when I looked into the pool it looked busier than at 6.30am – there were people splashing, messing around and generally having fun! I told myself I just needed to crack on with this and go swimming. It's what I was there for. I did – it was excellent, and I had a great swim! 🏊

So, this was naturally discussed as part of the routine conversation in session #2: although there was initial anxiety, nothing bad happened, and at the end of it a good swim took place!

So, the plan is all about mixing things up. Learning how to do different things, much like when changing job roles or companies.

My homework was then set for the following week: mix things up, break the routine, do things differently, don't plan – just do! This is also where this blog came from. I've always wanted to write a blog, mostly focusing on technical problems that I've come across in my life, so I thought: why not just start? So I have, and here it is.

#lifeevents #reflection

Recently I've been noticing elements in my life that need some additional focus to improve overall quality of life. This is the first of a few articles about that specific journey, as it happens.

History

In 2010, after nearly 7 years of marriage, my wife (at the time – obvs) had an affair and left me for her old school friend. I was working full time, and she was a “stay at home mum”.


I'm in my mid-40s. Life is changing dramatically (all for the good!) but I'm finding it difficult to process some of the events and elements that are occurring.

In addition to this, I've always wanted to write some technical notes based on some of the various problems or projects that I come across within my role in IT.

