12V LiFePO4 UPS using Arduino in progress

What I’ve been up to 2017 – 2019

Well – golly – WordPress has changed a LOT since I was tinkering with it a couple of years ago. It's going to take me a while to adjust to the new editor. My blog sits on an Ubuntu desktop VM consuming 2GB on my main ESXi server no matter what the demand level is (or isn't). I really need to move that to a container now. Containers are fun.

2017: I wanted a programming job. And I found one.

Early 2017 – further development of my Home Lab

Let me step back to the first half of 2017, after I decided I wanted to be a programmer. Despite being in my mid-40s I decided to go for it, and to help me explore technology I wanted a better home lab.

At home I'd been learning more about WordPress (using PHP and MySQL) with IIS and Linux servers, some AVR/Arduino stuff for IoT projects, and the .NET Framework and what goes on under the "syntactic sugar". I'd focused on C# and Visual Studio, but also used Visual Studio Code where it seemed appropriate (especially on Linux). I'd set up ESXi (having previously used just VMware Workstation – since 2006 I think – plus Hyper-V and occasionally VirtualBox), and pfSense, making liberal use of tightly controlled VLANs, IDS and so on. I found pfSense's built-in load balancer handy for my reverse proxies, with SSL hand-off to an internal PKI, using IIS server farms and Let's Encrypt. XCA is great for managing internal PKI (in previous years, when I didn't know better, I'd set up Windows Server just to have an internal Certificate Authority!).

Then there was hMailServer (on Windows) … and setting up DMARC, DKIM and SPF: simple DNS records that still seem to escape many companies! I scripted back-ups and made PRTG sensors for them. And I sat a couple of relays behind my pfSense load balancer too.
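
For anyone wondering, those three are just DNS TXT records. A minimal illustrative set – hypothetical domain, selector and policy values, so tighten to suit:

```
example.com.                  TXT  "v=spf1 mx a ip4:203.0.113.10 -all"
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...publicKeyGoesHere..."
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

SPF says who may send for the domain, DKIM publishes the signing key the mail server uses, and DMARC tells receivers what to do when the other two fail (and where to send aggregate reports).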

Oh – and of course I started using Git for the first time! I used the GitHub API for my website, and Gulp (my first task runner). The JavaScript on my website was really bad … I've moved on a lot since then, but at least it was a start – it made me aware of ES6 and gave me an introduction to Node and NPM on Linux and Windows.

Why?

I'm a tinkerer. I like to learn. I like control (in an anxiety-defeating way). I like to help friends with my skills. Sure – I could have done everything I wanted in the "cloud", and arguably the skills learned would be valuable. But I feared a lack of control and visibility over processes – that my ignorance might leave me (and my data, and friends' data) more exposed. Also, my breadth of interests could end up being expensive, along with hosting all my data. I feared not having total control over processes where outside influences – e.g. something creating additional traffic – might cost me beyond my immediate knowledge and control. It seemed easier and more fun to have a home lab that also provided friends' email and websites.

Since then, a little experience with Azure has helped mitigate some of my fears, but I still think it's cheaper to do as much as I like on home equipment.

I do use a lot of internet bandwidth! (A lot of that will be CCTV).

[Screenshot: BT internet usage as of 1st December 2019, showing monthly usage of 1805.98GB]

Issues I had / have

  • VLAN connectivity upstairs. I was trying to get LAG working, for example using different media, but on my budget I couldn't find equipment that worked for me. I spent ages researching and trying to think of a way around it, but to no avail; I now just route cable outside the house.

  • Dodgy smart switches. I loved the TP-Link SG2008 smart switch and had two (three now), but when I wanted more they were no longer available. I had to buy later releases; lesser models. And they didn't like power glitches, losing their settings. (Later I learned what was actually going on.) And so for those in particular, and all of them in general, I wanted UPSes that would sustain them at least through brownouts, and preferably for some time, as my IoT infrastructure depended on them. My *fix* was to use some RAVPower power-banks that, at the time, supported pretty good pass-through. I knew it was a temporary measure, as the LiPo cells wouldn't appreciate being held at full charge for years on end, and I was gambling that they wouldn't go catastrophically wrong. I've heard LiPo cells can hold as much energy as an equivalent mass or volume (I'm not sure which) of dynamite, and in these smaller devices they're not designed to last that long. It made me uncomfortable. I used the same technique for Raspberry Pis and other 5V, 9V and 12V low-powered devices like cameras that might fail (crash) during some brownout conditions, and where I wanted to record activity in case blackouts were malicious.


    Later in 2017 a firmware fix came out for my smart switches. I've now applied it, but I only took the opportunity to check again this year, 2019!

  • Back-ups. Where do I back up to? VM backup, content (database + file) backup – without costing lots of money, anyway. I automated the most important data, but didn't have enough resources and time at that juncture to automate big back-ups. I'm trying to make the most of my Acronis and Office 365 subscriptions.

  • Documentation. Still a major issue. I've gone through so many changes that documentation is hard. I like the Agile suggestion – applying the philosophy of "working software" – but I have a lot of smelly ToDos to manage too.

  • Remote management. Eventually I updated the hardware for my pfSense box – which I deliberately keep as separate hardware with a lot of breathing space – to a SuperMicro board with IPMI, and I have a separate fail-over router for 4G access.


    But for my ESXi server (servers, now) I had deliberately chosen a Q87M-E motherboard for a Haswell i7-4770, as the combination was relatively low-energy at idle, but specifically for the good VT-d support, the VLAN-supporting Ethernet port, and vPro/AMT (Active Management Technology), which – together with a partnership with VNC – gave me the same sort of remote access to the board as you might find on a server board. It worked for a while – back in 2014 I guess – but by 2017 the AMT had mysteriously stopped working properly. 🙁 I couldn't figure it out back then and put it down to some corrupted boot loader of some sort. The rest of the board is still going strong! The Corsair H100i cooler is making a lot of complaining noises now, though – outside of its 5-year warranty.


    I can't afford decent server technology. If I replace this machine in the near future I'd probably go for an AMD 3950X and still have the remote management issue (though I might go for Proxmox instead of ESXi for the "next one").

    There don't seem to be any decent, deeply programmable, brownout/blackout-intelligent, remote-controlled sockets available commercially (outside of very costly industrial control) that aren't proven hackable over WiFi, and that are 100% state-reliable with proper feedback. I've tried a number of them, but no go. So even "turn it off and on again", done reliably, requires me to build stuff myself!

  • Electricity guzzling. I've accrued and deployed a lot of kit. Sure – it includes Pis, a NUC, compute sticks and general low-power stuff, but it all adds up. I want to be more efficient, but reliably. I'd love to buy more Atom server boards if I could afford them. If I send a machine to sleep, I want to be sure I can remotely wake it up again – I've had issues with some boards' S5 states, for example. And even when at home, I don't want a machine waking up by itself in the middle of the night. I want to manage electrical devices generally more efficiently, on a demand basis. I'd hoped "IoT" would be the way forward, but commercial stuff isn't quite there for reliability yet – if I want something, I have to do it myself. Details later.

  • UPSes. (Mains – not the 5V, 9V, 12V things.) I had very little money. I wanted a line-interactive UPS with sine-wave output for my server. Electricity is quite bad in our house – surge protection dies quite quickly. I'd bought a PowerWalker VI 2200 (as it was back then). I tried for some time interfacing it with Pis, and even my pfSense box, let alone ESXi … but I failed. (I use APC now, for the good NUT support.) But "AS IS", there was no way of shutting down ESXi gracefully in a blackout. Creating a NUT master on a Pi that could detect the blackout and run separately on batteries remained yet another ToDo (a minimal sketch of what that config looks like follows this list).
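
For the record, the NUT side of that ToDo isn't much configuration. A minimal sketch of the master half on a Pi, assuming a USB-connected APC unit – names and password are placeholders:

```
# /etc/nut/ups.conf -- define the attached UPS (USB HID driver for an APC unit)
[apc]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsd.users -- the account upsmon logs in with
[upsmon_user]
    password = secret
    upsmon master

# /etc/nut/upsmon.conf -- monitor the UPS and shut this host down on low battery
MONITOR apc@localhost 1 upsmon_user secret master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

Other machines (an ESXi host included, via a NUT client) would then run upsmon in slave mode pointing at apc@the-pi's-address, so the Pi decides when everyone powers down.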

There were lots of issues. More have developed. My PRTG email backup sensors have stopped working properly – they work when initiated manually, but not automatically from the Windows scheduler, so I think another permissions issue has arisen, though I can't yet "see" quite where. It can wait. (Update, Dec 23rd: finally *fixed* that.)

For some reason the TimeOutTherapy WordPress site suddenly became a lot slower after a WordPress or Windows update earlier this year, and I haven't had time to work that one out … it's just not a priority, as eventually I'll be moving it from Windows to a Docker container anyway.

I was maintaining a lot of VMs as pets, a lot of software and configuration, a lot of VLANs, a lot of hardware … I hadn't chosen a trivial thing to attempt. I feared fire from all the electrics and batteries – especially if not monitored/controlled when away – theft (encrypting everything is another issue with ageing equipment, and I have a lot of accrued data to manage somehow), equipment failure, and hacking.


August 2017: Debt Recovery Plus

A very nice agent for Venn Group seemed to understand me better than most and helped me get a sort of contract / trial role at DRP (it became permanent in December 2017).

DRP had a couple of tests for potential candidates. I can't remember the specifics now, but there was a fairly substantial test that I think included saving blobs in MS SQL Server (deliberately) and some kind of WinForms front-end for it. It wasn't trivial, and it was the first time I'd used SQL Server and T-SQL (and stored procs) in any substantial way. My previous heavy database experience was Access back in the 90s and 00s, some Lotus Approach, and casual MySQL – much of it laughable to serious database people, though I'm quite proud of some of the stuff I did in Access. Anyway, I submitted that test using Git, I think – a link to the repository, maybe. That opened the door to an interview, which included very basic C# programming tests along the lines of "what does this code do" and "what's the problem with this code". Woohoo … I was in.

I'd imagined debt collection – especially when related to the much-maligned traffic warden – to be a rather horrid profession recruiting horrid people. But actually, everyone at DRP, without exception, was nice! And very professional – and transparent and honest. I was amazed, really. Still, it wasn't an ideal set-up for me … too cramped an office, rather warm and noisy … but I was determined to make a go of it. NB: they've recently moved to more spacious offices (after I left).

Unfortunately the Senior (and only other) Dev left within a few months of my starting (nothing to do with me!!!). He seemed very experienced and competent to me and hadn’t been there long himself, but decided to move on to new challenges and was never replaced.


Without any other Devs, and this being my first Dev role in what was, for my autism, a challenging environment, I just had to do my best. While still there, the Senior Dev helped me get to grips with StructureMap for IoC (though I've not used it since – IoC is built in now for so much stuff in ASP.NET Core). One of the first things I tackled was an ASP.NET MVC / SQL Server / Razor + Bootstrap browser-based internal application using Active Directory authentication. I forget details now – it used Entity Framework (pre-Core) too … err … no, I remember – that was Code First, and I used some special tooling the Senior Dev had developed over time. OWIN, and … hmmm … MixedAuth, I think, though I can't remember why! My colleagues seemed reasonably OK with the app – it fired emails depending on actions selected, stored content and meta-content in the database, and led the user through various states (I was inspired by some elements of SAP that I'd used in Manchester City Council as a purchaser years earlier). It was a steep learning curve, although having home experience with JavaScript, web servers, HTML/CSS and even Bootstrap, jQuery etc. helped. I did quite a lot in JavaScript, actually, and used the DataTables jQuery library. Also, my self-learning at home had already primed me for heavier use of lambdas, and while I hadn't covered LINQ until then, it was something I could pick up quite quickly. I could work on learning the OWIN / MixedAuth code at home too, as I'd set up Windows Server 2016 mostly for Active Directory, and to host an NFS share its ESXi host could use to access a USB pass-through disk.
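
I don't have the original code, but the StructureMap pattern was roughly this kind of thing – illustrative types only:

```csharp
using StructureMap;

public interface IEmailSender { void Send(string to, string subject, string body); }

public class SmtpEmailSender : IEmailSender
{
    public void Send(string to, string subject, string body) { /* SMTP call here */ }
}

// A Registry is StructureMap's place for wiring interfaces to implementations.
public class AppRegistry : Registry
{
    public AppRegistry()
    {
        For<IEmailSender>().Use<SmtpEmailSender>();            // explicit mapping
        Scan(s =>
        {
            s.TheCallingAssembly();
            s.WithDefaultConventions();                         // maps IFoo -> Foo by name
        });
    }
}

// Composition root:
// var container = new Container(new AppRegistry());
// var sender = container.GetInstance<IEmailSender>();
```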

There were a number of small-to-medium updates I made to existing software. For example, for a WinForms application used to process emails, I was asked to provide automatic highlighting of key words that the team manager could maintain centrally. That was quite fun, actually, as it involved more "classic" programming – processing data, in this case working with Syncfusion controls. It was just satisfying. Like Sudoku. Actually, that software was in two parts … one scheduled part pulled down emails and saved some of the content to a database; I cleaned that up somewhat and turned it into a PRTG sensor providing JSON feedback (something like the sketch below).
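
PRTG's "EXE/Script Advanced" sensors just expect a JSON document on stdout, so the C# side can be as simple as this – channel names invented for illustration:

```csharp
using System;
using Newtonsoft.Json;

class SensorOutput
{
    static void Main()
    {
        // Shape expected by PRTG EXE/Script Advanced sensors:
        // a "prtg" object containing one result entry per channel.
        var payload = new
        {
            prtg = new
            {
                result = new[]
                {
                    new { channel = "Emails pulled", value = 42 },
                    new { channel = "Save failures", value = 0 }
                },
                text = "Last run OK"
            }
        };
        Console.WriteLine(JsonConvert.SerializeObject(payload));
    }
}
```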

Some of the ageing software used web services that needed tweaking. It helped that I was already somewhat familiar with IIS.

Smaller new pieces of work included automatically updating an Excel spreadsheet from a SQL Server database that was itself updated by a scheduled Windows app pulling from the Web API of a cloud-provided service we used. Another small piece was a quick WinForms app you could drag an XML document onto, where some specific line endings needed changing to make the document palatable to another process.

One big piece of work I did was a refactor of some VB.NET software that processed a large quantity of incoming data in many formats, including CSV, XML, classic Excel and XML Excel, and other bespoke types. After I was tasked with updating it to handle changes in data input, including new formats, I felt I had to refactor it to give it much more structure, apply basic DRY in a substantial way (removing thousands of lines of code, although having to rewrite thousands of lines too), and give at least a nod to SOLID. I actually made it hundreds (literally – truly literally hundreds) of times quicker, using basic data structures and efficiencies … but mostly by not using Microsoft.Office.Interop.Excel!!! I used two libraries, I think – EPPlus and maybe NPOI (both conferred some advantages) – created additional functionality, and abstracted some functionality into C# (much nicer for my tastes). Using these libraries also meant my colleagues no longer had to close other Excel sheets when using the software (it had previously forced them closed!). This considerable update greatly reduced time wasted waiting for the software and disruption to workflow. Incidentally, I also fixed various errors and bugs while I was about it, and enhanced other file-explorer functionality.
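
The performance difference largely comes down to EPPlus (or NPOI) reading the file directly, rather than remote-controlling an Excel process cell by cell over COM. A representative EPPlus read – not the actual DRP code, file name invented:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using OfficeOpenXml; // EPPlus

class BulkReadDemo
{
    static void Main()
    {
        var rows = new List<string[]>();
        using (var package = new ExcelPackage(new FileInfo("incoming.xlsx")))
        {
            ExcelWorksheet sheet = package.Workbook.Worksheets[1]; // 1-based in EPPlus 4.x
            int lastRow = sheet.Dimension.End.Row;
            int lastCol = sheet.Dimension.End.Column;
            for (int r = 1; r <= lastRow; r++)
            {
                var cells = new string[lastCol];
                for (int c = 1; c <= lastCol; c++)
                    cells[c - 1] = sheet.Cells[r, c].Text; // formatted value, no COM round-trip
                rows.Add(cells);
            }
        }
        Console.WriteLine($"Read {rows.Count} rows without ever starting Excel.");
    }
}
```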

Disclaimer … I'd used Interop.Excel myself on a fairly large piece of efficiency software I'd been tasked with that never really made the grade – by default, really, as I recalled COM objects from more than a decade earlier, playing with VBA (and then, of course, ActiveX!). I didn't know better and was frustrated with slow processing times. Well – I learned, then applied that learning to someone else's software and came through in the end.

There were a number of techniques I discovered and applied to several pieces of software when appropriate. For example, Costura.Fody for creating a single .exe, which I used for some smaller WinForms projects. Now, of course, .NET Core 3 does that anyway (see below)! Also, MSI installers were used a lot, along with some proprietary installer-creation technology that only worked with Visual Studio 2015, so I learned how to use the WiX Toolset instead. Declaring all those files in XML … ugh … you can see the motivation for preferring a single .exe 🙂 And such things lent themselves to the Unix philosophy and could be used in pipelines more easily. I did try building self-updating software that exploited the helpful characteristics of a single .exe – and it did work – but that was for software that never made the cut, and I never really had the opportunity to apply the technique to our other software (besides, it ideally needed more publisher security checking), the idea being to avoid deployment headaches. SlowCheetah was particularly helpful in saving time switching between Debug and Release, e.g. for database connection strings in particular.
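
For reference, the .NET Core 3 equivalent of the Costura trick is built into publish. A sketch of the csproj properties (the runtime identifier will vary by target):

```xml
<PropertyGroup>
  <OutputType>WinExe</OutputType>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <PublishSingleFile>true</PublishSingleFile>
</PropertyGroup>
```

Then `dotnet publish -c Release` emits one self-contained executable – no IL-merging add-ins required.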

Another thing I introduced to DRP software was Dapper. There's a good case to be made for code-first Entity Framework from a Dev perspective, but I was privileged to work with some very able colleagues possessing a great deal of SQL Server knowledge, and for existing software there was a tendency to use stored procedure calls. And some software had hard-coded SQL that at least needed sprucing up.
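
Dapper fits that stored-procedure culture neatly – it just maps result columns onto plain objects. A sketch, with the procedure, parameter and columns invented for illustration:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Dapper;

public class CaseSummary
{
    public int CaseId { get; set; }
    public string Status { get; set; }
}

public static class CaseRepository
{
    public static IEnumerable<CaseSummary> GetOpenCases(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            // Dapper opens the connection if needed and maps columns
            // to properties by name; results are buffered by default.
            return conn.Query<CaseSummary>(
                "dbo.usp_GetOpenCases",
                new { MaxAgeDays = 30 },
                commandType: CommandType.StoredProcedure);
        }
    }
}
```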


Of course, a lot of stuff didn't work out, too. My knowledge of email technology ranged from running a server at home and the related DNS settings, to processing emails themselves at work. I get frustrated with the lack of utility in standard email clients for really examining the properties of emails and identifying issues – e.g. malformed parts, malicious crafting, attachment issues – which I thought would help me debug email processes and help IT with email issues. I also reasoned it would be helpful to read IMAP emails without marking them as read (disturbing automated processes), while still maintaining a complete audit of user access to emails for data-control reasons. NB: I must stress that any automation implemented in no way made decisions based on client data; it was purely to reduce repetitive processing bottlenecks. I'd become quite familiar with the EAGetMail libraries, for example, and was getting better at WinForms apps by this time. I put a fair bit of work into it, but never really got people on board with it, and priorities changed.

Address and postcode processing projects got left behind as agendas changed. And some work I produced – where maybe I made mistakes myself, like relying on the aforementioned Interop.Excel, but where there wasn't a strong brief – didn't quite deliver a sufficiently compelling product to become regularly used.

Obviously all the stunted efforts were still beneficial to me, being great learning opportunities, but I was well aware from reading and podcasts how software development and delivery should be done, so of course I was disappointed in myself when I simply lacked the experience and skills to bring about such effective changes.

Other software I worked on included an Android application (in Java, using Gradle / Android Studio). Some of my changes never saw the light of day, unfortunately, including adding business meta-information as overlays to pictures taken, which I was quite proud of. I was able to apply some of my VM and networking knowledge so that I could install Android Studio in a separate VM to avoid clutter, use an Android emulator outside of that with virtual networking, and also pass through a WiFi USB dongle in an isolated way to Android Studio on the same virtual network, so that my rooted phone could use Android Debug Bridge over that isolated WiFi rather than be put on the main network. I did spend some time looking at MVVM implementations in Xamarin, but there was still only a tenuous supporting link for the Android app. Besides, trying to get the POS printer's Java SDK ported to Xamarin was proving very tricky.

I was also working on enabling remote builds of the Android app for my colleagues. Not using a sensible CI/CD deployment, as I had no experience of that at the time, but using an ASP.NET Core 2 Web API to control the long-running processing (out of process somehow – I forget the technique), passing build parameters to the Gradle file and controlling another .NET Core console app that created a SQLite database from CSVs. I wanted to automate much of the build process and make it more accessible, step by step, via the browser. I used Ace for file editing in the browser, and the Handsontable data grid for editing CSVs (and SQLite directly). To update the browser with build feedback and make sure exclusivity was maintained, I used SignalR for the first time.
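
The SignalR part was conceptually simple: a hub the browser connects to, and the long-running build pushing progress through it. Roughly this shape, with hypothetical names, in ASP.NET Core style:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Browser clients connect to this hub and listen for "buildProgress" messages.
public class BuildHub : Hub { }

// The long-running build work takes the hub context and broadcasts as it goes.
public class BuildRunner
{
    private readonly IHubContext<BuildHub> _hub;
    public BuildRunner(IHubContext<BuildHub> hub) => _hub = hub;

    public async Task RunAsync(string buildId)
    {
        for (int step = 1; step <= 5; step++)
        {
            // ... invoke Gradle / the CSV-to-SQLite console app here ...
            await _hub.Clients.All.SendAsync("buildProgress", buildId, $"step {step}/5 done");
        }
    }
}

// In Startup: services.AddSignalR(); and map the hub to a route such as "/buildHub".
```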

Unfortunately this was turning into a more substantial project than I initially estimated, and as we (DRP) became part of Bristow & Sutor, I had to prioritise (expecting the build process to be simplified thanks to other developments). And so this rather interesting project had to be abandoned as we approached the end of 2018.

Obviously I can’t talk about everything I worked on – if I think it might reveal too much business information.

January 2019: Becoming more integrated with Bristow & Sutor

Bristow & Sutor – another successful family-owned business, although much, much larger – absorbed DRP. And they already had an experienced Dev team, but based at their headquarters over 100 miles away.

I thought that would likely mean an end to my role, being a fledgling Junior Dev based in Manchester – especially as all the stuff I've been talking about so far would be replaced over time with more forward-thinking ways of doing things. This interested me … I'd read The Phoenix Project and other DevOps material, and had followed many podcast episodes on subjects like this and microservice development.

Regrettably I can’t say that much publicly about my work with Bristow & Sutor as much of it pertains to developments not yet in the public domain.

Anyway. I thought my head had been mangled quite enough already with all the different languages and technologies I had been switching between, but now the pace really picked up. And it was difficult! So much to learn, so much change, and not in an ideal physical environment, with glitchy long-distance communications to begin with and slowly developing integration of IT infrastructure … everything that was especially challenging for me, lol. BUT: it was exciting!!!

Agile

I'd been on my lonesome, effectively, using an older on-prem version control system, and suddenly I was working in a Scrum, learning the tools Microsoft offers for it, and with Git. I struggled to hear what people were saying in sprint planning meetings over Microsoft Teams teleconferencing. Over time this got better, and I think I got better at my story point estimates. As our needs changed, we moved to a Kanban style of working.

Maintenance; Angular and JWT

There were many years of development of complex, specialised, established legacy software to maintain, strongly coupled to a vast set of intertwined business rules. Very confusing to a newcomer in a distance relationship – I wish there had been a comprehensive glossary; it would have been much easier being onsite all the time for this kind of introduction. More WPF than I'd previously worked with, but lots of SQL and other more comfortable references I could at least relate to. My access to processes and data was phased, as Bristow & Sutor take their obligations to protect user data (including from accidents) quite seriously, though that did impede my ability to follow processes and data streams when trying to understand or work with them in the beginning.

One of the first pieces of work – the first of a number with this system – was on their web-based interface for clients, which used AngularJS, although not quite in a 100% standard way. It was really my first introduction to a serious SPA as such, although I'd already met some of the technologies while devising my own website two years earlier. My work included adding additional styled controls to filter data, which also required updating SQL, the Web API (C#), controllers in the Angular MVC, and some heavy JS/CSS. I also made updates to how JWT was used with regard to roles, permitted impersonation, and switching sessions. This work and similar – which eventually included introducing new routing and services – was, I was given to understand, to have a material time-saving impact on the large client user-base.
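
The role side of JWT is just claims baked into the token when it's issued. Not the actual code, but the gist in C# – key, issuer and claims are all invented:

```csharp
using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class TokenIssuer
{
    public static string Issue(string userName, IEnumerable<string> roles, string signingKey)
    {
        var claims = new List<Claim> { new Claim(ClaimTypes.Name, userName) };
        foreach (var role in roles)
            claims.Add(new Claim(ClaimTypes.Role, role)); // what [Authorize(Roles = ...)] checks
        // Impersonation can be modelled as an extra claim recording the real user.

        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey));
        var token = new JwtSecurityToken(
            issuer: "example-issuer",
            audience: "example-clients",
            claims: claims,
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: new SigningCredentials(key, SecurityAlgorithms.HmacSha256));

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}
```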

Microservices

I can't give details of much of the new work going on and must stay very generic. I was working a great deal with Visual Studio Code on Linux, with Docker and Docker Compose … though I haven't personally quite mastered Kubernetes. During my time there, AWS and Azure were looked at; e.g. we worked with Azure and NoSQL implementations. I spent some time learning and developing React/Redux on Node. I'd had previous experience at home with Node, NPM and, importantly, NVM, including on Windows (I use Chocolatey), which saved me the hassle of versioning headaches in my Dev environment. I'd already gone through that pain.

Painfully, I've tried to switch to TDD, but it's a transitional process – practising the bowling game kata. (TODO: more kata practice!) And there was BDD stuff we used too. I quite liked Cypress for end-to-end testing.

It was the first time I'd used Docker (or any container) in a meaningful way (aside from early attempts to understand it at home maybe a year or two earlier), and now I'm hooked. It was quite hard working with it in Windows (although it's become better with more WSL integration, using Linux tooling, and of course PowerShell runs well in Linux … oh, and I really want to try WSL2). Many of us switched to working in Linux to make life simpler. It was interesting working on image layers, and using Personal Access Tokens that could be passed into the build process in such a way that they're not a security risk and can be integrated into the CI/CD build pipeline more easily (the general shape of that pattern is sketched below).
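
I can't share the real Dockerfiles, but the usual shape of the token-safe pattern is a multi-stage build: the PAT is consumed in an intermediate stage and never copied into the image that ships. A generic sketch, with repo and image names invented:

```dockerfile
# Build stage: the PAT only ever appears in this intermediate stage.
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
ARG GIT_PAT
WORKDIR /src
RUN apt-get update && apt-get install -y --no-install-recommends git \
 && git clone "https://build:${GIT_PAT}@example.com/org/private-repo.git" . \
 && dotnet publish -c Release -o /app

# Final stage: fresh base image; no ARG here, so the token is in none of the shipped layers.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "PrivateApp.dll"]
```

Built with something like `docker build --build-arg GIT_PAT="$PAT" -t private-app .`. One caveat: build args are still visible in the intermediate stage's history, so this only keeps the token out of the final image – BuildKit's secret mounts are stricter if you need more.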

During my time there we did some work with Kafka.

And that’s all I’ll say here about that.

Programming skills

Over the last couple of years I've looked at so many things that I think my C# has diluted a bit. Before starting my first programming role I understood factory injection using reflection; now I'd have to relearn it. My use of R# has dropped off a lot, and my understanding of some of the concepts has eroded. On the flip side, I've had experience of many different tools and worked on a number of projects, so my "familiarity" with C# – once in the zone – has become more instinctual. I've learned things about myself – for example, I naturally seem drawn to composition rather than classic OOP. After all that work trying to figure out contravariance. 🙂 At DRP I tended to program to interfaces a lot at first, but the nature of the programming seems to have changed over time. I have used a lot of LINQ and the built-in delegates. I've gone over async/await a number of times now, but always seem to need a refresher – though I have marshalled threads in a fairly sophisticated way in WinForms, and I did learn how to get WPF (for a PRTG watchdog) to start up asynchronously.
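
The async WPF start-up pattern I mean is roughly this – a minimal sketch with hypothetical window and probe names, not the watchdog's actual code:

```csharp
using System.Threading.Tasks;
using System.Windows;

namespace WatchdogApp
{
    // App.xaml has its StartupUri removed so we decide when the window appears.
    public partial class App : Application
    {
        protected override async void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // Await setup work (e.g. an initial sensor probe) without blocking the
            // UI thread; the await resumes on the dispatcher, so it's safe to
            // touch UI objects afterwards.
            string initialStatus = await Task.Run(() => "probe result placeholder");

            var window = new MainWindow { Title = $"Watchdog – {initialStatus}" };
            window.Show();
        }
    }
}
```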

I want to spend time learning React/Redux better … but that's more modern JS and a framework! There are so many tools and techniques for things that there seems to be no time for the language I'm most interested in. I would like to make more time for it. I'm going to try to work on home projects where I deliberately apply some of the techniques I've learned from work and demonstrate more of my C# … though so far I seem to have spent more time on hardware, IT configuration, PHP fixes for my website, and now Python. But I am working with .NET Core 3 – on the desktop, on the Pi, in containers, and with remote debugging.

November & December 2019 – working on home projects

Ended time at DRP

Working effectively in a distance relationship, in a work environment that wasn't ideal for me (I'm a bit sensitive that way to call centre activity etc.), on so much new stuff, was hard! For the usual kinds of roles there I'd recommend DRP as a nice place to work, by the way. With two to three more years of experience I'm sure it would become easier to work primarily remotely, but for me, with no more DRP work to do locally, it made inevitable sense to part ways with Bristow & Sutor / DRP – although we remain good friends. I have no regrets at all about working there – I learned so much and felt I'd made some good contributions. My termination date was 31st October 2019.

I’m sure I could have found more work straight away but I wanted to get a lot done at home first.

Big Tidy-Up

During October and November I started a new "Big Tidy-Up", as I like to think of it. This is a non-trivial task for me and is worth mentioning as it ate up much time. I have little space to speak of in the little house my partner, Liz, and I share. I got rid of over 100 DVDs (err … archiving them for Plex first and taking photos). I also "archived" (to hard disk, I mean) over … 300, maybe … software discs I'd accumulated since the 90s. That also meant taking photos and notes. In this Big Tidy-Up I also got rid of four stuffed dustbin bags of clothes (taking pictures of everything first … yes … I'm neurotic). There was another round of a huge pile (a large crate's volume) of old gear I surrendered to Salvage Guy (my name for the guy who takes our discards). And the biggest consumer of time was getting rid of books. For example, I'd accumulated some 200 much-loved Doctor Who books since the 80s. I wanted to keep them but had no space, so I scanned them. Maybe in 30 years, if I'm still around, an AI will read them to me.

Scanning them was interesting … I made it more efficient. Earlier that year, in anticipation of what I wanted to do, I'd bought an IPEVO V4K specifically because it had Windows, Mac and Pi SDK software I could build on. For my book scanning I very quickly/roughly adapted the Windows example software to use Microsoft speech synthesis to give me an audio countdown for each automated scan, along with periodic automated refocusing and a simple pause ability (the gist is sketched below). This is why I love programming! But even so, over 27,000 scans – those books and others I was getting rid of – took some serious effort and time.
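
The countdown part was only a few lines on top of the sample app – System.Speech makes it almost free on .NET Framework. A stripped-down sketch of the idea; the IPEVO capture call is a placeholder for the SDK call:

```csharp
using System.Speech.Synthesis; // reference System.Speech.dll (.NET Framework, Windows)
using System.Threading;

class ScanCountdown
{
    static void Main()
    {
        using (var voice = new SpeechSynthesizer())
        {
            voice.SetOutputToDefaultAudioDevice();
            for (int page = 1; page <= 3; page++)  // really: one pass per page turn
            {
                for (int s = 3; s >= 1; s--)        // audible countdown to the shot
                {
                    voice.Speak(s.ToString());
                    Thread.Sleep(1000);
                }
                voice.Speak("Capturing");
                // CaptureFrame();  // placeholder for the IPEVO SDK capture call
                // The real loop also refocused periodically and honoured a pause key.
            }
        }
    }
}
```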

A sample of stuff I’ve scanned / archived.

"Tidying up" also meant sorting through hard drives more … I'm not sure how many TB I have, but it must be at least 50TB. Ideally I need more to make sorting easier. I gave away most of my 1TB and smaller drives to colleagues and friends. I have some dating from the 90s that still work – I need to sell them on eBay! Already wiped, though. Speaking of wiping things, I had to destroy some failed equipment – e.g. an original GoPro, a Dell X51v, and a Windows tablet that I couldn't boot any more – just to make sure data couldn't be retrieved (to be on the safe side). I spent some time trying to fix things, but it was an unprofitable exercise.

Tidying up also included disassembling old projects from 2017 that I won't be continuing now I've moved on to new projects, rescuing and sorting as many parts as possible. In general, I've sorted the mess accumulated over the last couple of years.

Then of course there was more scanning. I've been "scanning" for decades: school exercise books, college, university. Receipts, even. All correspondence. Maybe it'll be interesting one day after I'm long gone.

[Photo: messy room while working on the Arduino UPS project]
This is what things are like when I’m tidy!!! This is why I have to tidy so I can work on projects.

Home Projects

While at DRP there were so many new things to learn all the time that I had little energy for home projects, so when the opportunity came to resume them – and with more knowledge – I jumped on it.

I have a problem at home. Too many batteries.

LiPo
Lead Acid
LiFePO4
WHY?

At least 35 LiPo batteries. (Actually, I've thought of 3 more since … and probably have over 40.) (Update 23rd Dec: actually it's over 50!)

Dangerous?

They're not absurdly dangerous, even when holding so many at full charge and using pass-through as I do for UPS reasons. But if each one has the same chance of starting a fire under these conditions as someone winning the National Lottery with a single ticket, then I have 40+ tickets. And every week it's like I've bought another 40 tickets, as the batteries degrade. And years are passing now; with each passing year, the risk factor increases. After 5+ years of this kind of use, I would be very wise to get rid of them.
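
To put illustrative numbers on that (an assumption for the sake of the arithmetic, not a measured failure rate): if each battery independently had probability p of a fire event per week, then over n batteries and w weeks the chance of at least one event is 1 − (1 − p)^(n·w). With p = one in a million, n = 40 and w = 260 (five years), that's 1 − (1 − 10⁻⁶)^10400 ≈ 1%. Each ticket's odds are tiny; the problem is how fast the tickets accumulate.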

NB: in better electric cars – especially Teslas, of course, and for buffering especially the e-tron (e.g. not charging or discharging beyond a buffer point so the cells are less stressed) – cells are managed very well. They don't exceed over-discharge rates (generally, unless you choose to disregard warnings), never exceed over-charge rates (in theory), are heated or cooled to optimum temperatures, and so on. Teslas have very little cobalt and a lot more cells, which in layman's terms should make them a bit more volatile; but despite the odd media sensation celebrating public failures, they are extremely safe – the failure rate is much lower than for petrol vehicles when it comes to fires. In my case, though, I can't guarantee the quality of the devices I have; I am using some of them in ways they're not really designed for, possibly stressing them, and they're only designed for lesser life-spans than the electric vehicles' cells.

About 9x 12V Lead Acids?

I can't afford fancy UPSes with user-serviceable batteries. I still replace the batteries, but it's a bit of effort. In our house I don't rate their chances of surviving much longer than two years (APC claims 3 to 5 years for the new UPS I've bought from them – we'll have to see). It's just a nuisance. If I could afford them, I'd give these a try: https://www.backupbatterypower.com/collections/batteries/products/high-discharge-rate-lithium-lifepo4-battery-12-volts-7-2-amp-hours

LiFePO4 cells have been around commercially since at least the 90s – I followed Thundersky and other battery developments on the 'net from back then. I'm frustrated they are still not widely adopted – especially in a fractional-C charging configuration, for planned operation over the cell's calendar life – maybe even 15 to 20 years. They're not risk-free … but they're angels compared to LiPo!

PIco UPS (and 2x 18650 LiFePO4 cells shown)

The PIco seemed to promise what I wanted (at least for an individual Pi): LiFePO4 cell support, sleep states, lots of monitoring and configuration. In practice, I can't see how to change the upper charging limit to make use of a fractional-C policy, and when using headless TeamViewer one day it was reporting undervoltage – even though I had it running from a 5A 12V supply and the PIco promises 3A for the Pi. The documentation is hard to follow (simply because there are so many hardware and firmware versions, dead links, etc.). So … back to the drawing board. I'll still try to make use of it, though. I had to buy the board itself from Germany (in German) and the cells from Italy – there is a chronic shortage of LiFePO4 technology like this in the UK, despite it being a going concern in China since the 90s … they have more electric cars there than anyone else, and most of them, I think, run on LiFePO4.

The old project took so much work just to get that far, but the complexities of an MCU BMS at that time required more devotion than I was able to give. There were other demands on me that I couldn't ignore. 🙁

The battery I’m using in my current project (though it was £128 when I bought it!!!):
https://www.allbatteries.co.uk/lithium-iron-phosphate-battery-nx-lifepo4-power-un38-3-153-6wh-12v-12ah-f6-35-aml9133.html

Sigh. People are always wondering why. 🙂 Sometimes I really wonder too – wishing I were normal and would be happy watching football and eating fish and chips and not thinking about much else in life. Why do I put myself through this pain? I suppose it's just my autistic obsession and anxiety-driven hyper-awareness and alertness. I want things that don't exist, to solve what seems like an ever-increasing set of problems rippling out from some central premise of function, with certain guarantees of reliability and accessible insight; ergo, I have to make them if I want them. Even though it's hard and disruptive and expensive.

  1. I'm a tinkerer, as already established. I also have general anxiety and obsessive interests. Basically, I don't think like normal people.

  2. IF I had lots of money I'd use the cloud more, and pay other people to make the things I want that don't exist. But I haven't, so I don't. (Update: actually, colocation seems the way to go for me!)

  3. IF I had lots of money for many-threaded, efficient machines I could use as VM and Kubernetes cluster hosts, then I would get them (at least one for failover, and I'd still need a local back-up destination and a cloud destination for the most important data / deltas etc.). But I don't.

  4. I could buy OLDER server gear with better fail-safes and remote management built in or available as options … but it would use far more electricity than I'm willing to sink, run hot, be more fragile, etc. I have considered it, and as time passes I might consider it more. I'm not making any money out of doing all this – it is just a home lab fantasy, but one where I have obligated myself to others too. (As I've become more confident with Linux, I might be able to mitigate and migrate some anxieties to the cloud within an affordable package.) (Update: probably going to do this, for use in a colocation.)

  5. If I want to minimise my use of bigger, ageing machines and not be totally reliant on them, I have to run some things on smaller things. There is an upward trend in computational performance versus footprint and power requirements. Kubernetes grew out of Google's internal system called Borg – as in Star Trek's Borg? That cries out for having more distributed computational nodes. Hanselman has championed some Pi implementations of this principle. And if power requirements are low and functionality more specific, then it should be easier to avoid so many power cycles, there should be less risk of things going wrong because of co-hosted processes, it should be easier to use watchdogs/sleep-states etc., and power conversion for LiFePO4 UPS battery back-up (if more of it existed) would be more efficient. And it should be relatively cheap to deploy physical machine redundancy in different parts of the house, so if there are problems (e.g. fire or flood) in one part, I'd still hopefully have some incoming telemetry from another machine to give me sensor readings while I'm away.

    I particularly want to run things like GitLab on an independent machine. It could still divvy out tasks to higher-powered machines for CI/CD, but at least I can see it physically, know exactly where my most precious data is, and eventually physically lock it away. (The Pi isn't any good for encryption, unfortunately – not even the Pi 4 … I did try it with LUKS and that was horrible!)

  6. Then I ask myself: how many things can I run off one mains UPS? The UPS will have to support NUT. I'd need a master, and some kind of communication with the other devices. And without further measures, I'd be assuming they aren't lying about being off when they stop responding after being told to shut down. How long should the UPS wait for devices if it's not sure, wasting precious battery energy? (This is where the remote management part of my project comes in.) Does anything stay on to count the passing time? Keep an eye on the environment? Report back to me? How is that powered? Do I have to power the big mains UPS up just for one device to have a quick look around? And what if the other devices are set to come alive again when AC returns – maybe an ESXi server which slowly boots all its VMs in staggered formation over several minutes, but then has to be told to shut down again? That just doesn't sound right.

  7. Other devices. Cameras. Imagine you have anxiety like I do. If brownouts (and they do happen) crash your cameras when you're away, but you don't know what's happened, that can just fill you with more anxiety! Fire? Break-in? Is everything off-line? Mains tripped? Extended blackout? OK – what about using a standard WiFi or Zigbee mains switch, so you can at least power-cycle the thing remotely? Not a bad thought, but remember that even the best WiFi ones can be hackable, and not 100% reliable against spurious switching or losing their settings, which might introduce even more chaos and uncertainty. So far I've found the Zigbee ones unreliable. They don't seem able to check whether the command has actually worked, and the types of relays used can even stick. And Zigbee is often used on 2.4GHz rather than the lower frequencies more appropriate for internal spaces. The way meshes work is amazing, but I've found it a bit hit and miss. Even if you could power-cycle them, a brownout could corrupt the SD card and, depending on the firmware, make the camera unbootable. I'd rather at least protect against brownouts and short blackouts with a UPS. But options for that, commercially, seem realistically limited to LiPo implementations (which I don't know for sure are any better than my pass-through power-banks), or just using a lead-acid UPS, or building your own thing. There is generally no intelligence in the available solutions – nothing easily programmable that ties in with other machines.

    Then there’s the router of course. And 4G fail-over. Etc.

    And just using stupid UPSes, without talking to them, will cope with brownouts and short blackouts; but an extended blackout – or where the leccy comes and goes but mostly goes – might still cause corruption issues, even using the more robust file systems on SSDs and endurance SD cards that I now use on my other devices.

    So much uncertainty … drives my anxiety.

Using the large-capacity LiPo power-banks with pass-through did work for brownouts and quite long blackouts, and potentially averaged out current draw. But they're stupid. And they don't age well. And they might turn on me.

Unreasonable goals

What I’m really trying to do is unreasonable. I simply don’t have the time or the space to build everything I need for a reliable set-up at home where I keep lots of equipment going in a “server” capacity. The equipment I need to make it reasonable simply doesn’t exist or at least I can’t find it (at a reasonable cost).

I don’t even have time to effectively write about everything I’m doing or even finish this page nicely.

Colocation

So now I'm actually looking into colocation costs and availability near me. It might be simpler to purchase 1U of space, buy a refurbished server, squeeze in a tiny 5V OpenWrt device (as an OpenVPN client) for whatever flavour of IPMI there is, running off the PSU's standby output (paying for an extra IP for that, and maybe a separate NIC for hypervisor host management), and, using ESXi, probably have a VM for pfSense and just do everything virtually. I could connect using OpenVPN (as I'm familiar with it) and pass 802.1Q VLAN tags. I'd still need 4x Windows VMs, I think, for now … but everything else I'll try to do in containers where possible on Linux – probably Debian – and minimise VM maintenance. That way I could have a few 8TB IronWolf Pros, say (no RAID required for my application), and a 1TB SATA SSD, and host most of my media data and services on that – with less anxiety about theft (though data centres aren't impervious to theft!), fire, and brownouts/blackouts while away. Plus there are often services where you can pay someone for hands-on work if absolutely necessary. So that's the plan, when I'm working again and have paid off some of this extra accumulated credit card debt.

Continuing with my projects but reducing scope

Meanwhile though I’ll still need the projects I’m working on for dependable remote access to the house and reporting from the house, and IoT stuff, plus I’m trying to containerise more and maintain less.

I hastily recorded a video on my phone. It's in three parts – I'd assumed I'd be able to "join" them together on YouTube, but it seems that's changed in the last couple of years too and I can't do that any more. So, as I've run out of time this side of Christmas, editing will have to wait.

I'm not keen on talking much – I've never been very good at it. I find it quite exhausting; my throat hurts, and even my chest, if I do a lot of talking.

I talk more about the main project I’ve been working on here:
https://blog.xarta.co.uk/2019/12/brief-introduction-to-my-mains-power-management-box/

Looking for another job

Now I have to look for another job.

Preferably within easy commuting distance of Stockport.
Preferably with a focus on C#, or combination of C# and maybe a JS framework – preferably React or Angular.

In my dreams I'd love to work on a mix of low-level and high-level interface stuff, a bit like I do at home – or maybe even robotics, where I'd get to use a bit of physics – but it'd also be nice just to work on one "thing" for a while: maybe web APIs, for example, or React + Redux. It would be nice to continue learning how to make the best use of containers and microservices, though it might also be nice to work on legacy systems for a bit and just become expert in something. I also enjoy refactoring. Actually, I really like refactoring – I seem to be pretty good at it! It's relaxing. And easy. Even when the language is new to me.

I've had over two years' experience looking at a variety of things – too many things, actually, rather than focusing enough on one thing to gain real expertise – but I regard myself as more of a mid-level programmer now, rather than junior. I have nothing against a junior title and would be happy with a more lucrative and interesting junior role. Ha – yes – I have on occasion fantasised about getting a job where I'm simply instructed: "learn everything there is to know about that one constrained-scope thing and make it better according to this detailed, itemised-with-metrics brief". That would be heaven. I could probably do something like that better than a lot of peers. Life, of course, is more complex and messy than that.

My biggest goal in Life is to be tidy. Before I die. Hopefully. I fear entropy shall always reign supreme. Although, … maybe an AI optimised to defeat entropy might prevail over physics? If the Universe itself became self-organising in a way to reverse entropy? Hmmm. Megalomaniac thoughts returning. 🙂