Screenshot CLI

I built a small open-source tool called Screenshot CLI.

It does exactly what the name suggests – takes screenshots from the command line.

It’s written in TypeScript and uses Playwright under the hood. You can point it at a list of URLs or just pass in one site, and it will spit out clean screenshots. The idea started as a quick way to document and test pages without having to click around manually.
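If you are curious what that looks like in practice, here is a minimal sketch of the core idea using Playwright directly. This is not the tool's actual code, just the underlying technique, and the URL and output paths are placeholders:

    import { chromium } from 'playwright';

    // Visit each URL in turn and capture a full-page screenshot.
    async function capture(urls: string[]): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      for (const [i, url] of urls.entries()) {
        await page.goto(url, { waitUntil: 'networkidle' });
        await page.screenshot({ path: `shot-${i}.png`, fullPage: true });
      }
      await browser.close();
    }

    capture(['https://example.com']).catch(console.error);

Everything the real tool adds (comparisons, reports, the .jsonc store) is layered on top of a loop shaped roughly like this one.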

Along the way I added a couple of extra features:

  • It can generate before-and-after comparisons if you want to track changes between versions of a page.
  • There’s a simple HTML and PDF report generator, so you can keep a tidy record of what you captured.
  • All the data gets stored in .jsonc files, which means you can re-run reports later without taking screenshots again.

The project is open source, and I’ve tried to keep the code readable and modular so others can fork it or extend it.

You can find it here: https://github.com/refactorau/screenshot-cli

If you try it out and have suggestions, feel free to open an issue or send a pull request.

Meshtastic: Open-Source Mesh Networking for the Real World

Meshtastic is an open-source project that makes it possible to build your own long-range communication network without relying on towers, phone lines or the internet. Using affordable radios that operate on LoRa technology, Meshtastic devices talk directly to each other and pass along messages through a mesh. This means that a group of users can stay in touch even in places where there is absolutely no communications infrastructure.

For anyone who spends time in rural and remote Australia, this has obvious value. Once you have the devices, you own the network. There are no subscriptions, no SIM cards and no hidden costs. The only requirement is to set the devices up before you go into the field.

How it Works

Each Meshtastic unit contains a small LoRa radio. LoRa stands for “long range” and is a radio technology that trades speed for distance. It can carry short bursts of data such as a text message or a GPS location for many kilometres, depending on terrain and antennas. Every device is also a relay. When one unit receives a message, it can forward it on to the next. This is what creates the mesh network. With enough devices in play, a message can hop over hills, through valleys, or across large properties until it reaches the intended person.

Messages are encrypted end to end, so even though they are being rebroadcast through multiple radios, only the intended recipient can read them. The system is designed for low power, so these radios can run for days on a single battery or be hooked up to small solar panels for continuous operation.

Most people use Meshtastic with a smartphone app. The phone connects to the radio by Bluetooth or USB and provides a familiar interface for typing messages or seeing where others are on a map. But the radios can also run on their own. Some models include small screens and buttons so you can send preset messages or read incoming notes without a phone at all.

Telemetry and Sensors

One of the most powerful features of Meshtastic is how easy it is to connect external devices. With very little effort, you can plug in telemetry sensors to measure things like temperature, humidity, air quality or gas levels. The Meshtastic firmware can read this data and broadcast it across the mesh automatically.

This opens up a wide range of applications. Farmers can check conditions in remote paddocks. Researchers can gather environmental data without returning to every site in person. Community groups can set up basic early warning systems using motion sensors or water-level detectors. Because the radios are inexpensive and battery friendly, they can be deployed widely without major cost.

Optional Cloud Connection

Meshtastic is designed to run entirely off-grid, but there are times when you may want to bridge the network to the wider internet. This is possible by setting up one node as a gateway. That node remains part of the local mesh but also connects to Wi-Fi or another internet link. Once it is online, it can upload data to cloud servers or send alerts to people outside the mesh.
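As an illustrative sketch (not official Meshtastic code), the internet-facing side of such a gateway is often just an MQTT consumer. Assuming the gateway node is configured to publish JSON telemetry to a broker, a small TypeScript service using the mqtt package could relay it onwards. The broker address and topic root below are assumptions; both are configurable on the Meshtastic side:

    import mqtt from 'mqtt';

    // Connect to the broker the gateway node uploads to (address is a placeholder).
    const client = mqtt.connect('mqtt://broker.example.com');

    client.on('connect', () => {
      // The topic root is an assumption; Meshtastic publishes under a configurable prefix.
      client.subscribe('msh/#');
    });

    client.on('message', (topic, payload) => {
      // Forward telemetry to a dashboard, database or alerting service here.
      console.log(`${topic}: ${payload.toString()}`);
    });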

For example, a set of Meshtastic devices could monitor conditions in a national park. Local rangers on the ground would receive the messages directly over the mesh. At the same time, the gateway node could log the same data to a cloud dashboard for managers in the city to review. This hybrid model gives you the best of both worlds.

Why it Matters

Australia is full of places where mobile coverage ends. Whether it is bushfire zones, flood-prone valleys, remote farms or conservation areas, the need for reliable communication remains. Meshtastic provides a practical, low-cost option. It will not replace the internet, and it is not designed for voice calls or video, but for simple text and sensor data it works brilliantly.

For organisations working in remote Australia, this type of system can mean faster responses, safer fieldwork and more reliable data collection. It is open source, easy to adapt, and constantly being improved by a global community of developers. For those looking at the future of IoT and remote communications, Meshtastic is… Fantastic!

What on Earth is Cat 1bis?


And why it’s about to change the game for connected devices in Australia and New Zealand

If you’re involved in IoT, asset tracking, remote monitoring or emergency connectivity, you’ve probably started to hear about Cat 1bis. It’s not hype. It’s a new cellular standard that solves some long-standing problems with deploying devices across Australia’s and New Zealand’s vast and varied terrain.

So what is it exactly?


Cat 1bis in plain terms

Cat 1bis is a simplified version of the existing LTE Category 1 (Cat 1) standard. The key difference is that it only requires one antenna instead of two. That makes it easier and cheaper to manufacture devices, especially compact GPS trackers, battery-powered sensors, and anything designed to run in the field for long periods.

It supports relatively high data rates, around 10 Mbps downlink and 5 Mbps uplink, and allows for features like power saving mode and extended discontinuous reception. That puts it in a sweet spot between the slower but ultra-efficient NB-IoT, and the faster, more expensive Cat 4 LTE modems used in phones.

For most IoT use cases, especially mobile tracking, Cat 1bis is fast enough, efficient enough, and now, finally, available.


What’s happening in New Zealand?

New Zealand is currently ahead of Australia in field testing and deploying Cat 1bis. Digital Matter, one of the major device manufacturers in this space, is already supplying Cat 1bis-enabled devices such as the Oyster3 Global and Remora3 Global, which are being used in active trials across NZ.

Vodafone New Zealand (now One NZ) has confirmed that these devices are registering successfully on the network using Cat 1bis, with no network-side changes required. That means Cat 1bis is fully operational on their existing LTE infrastructure. These trials are focused on GPS tracking and IoT deployments across logistics, conservation, and rural asset monitoring.

You can see confirmation from Digital Matter here:
Digital Matter: 4G Devices – FAQs
Device Compatibility Map


Why this matters in Australia

Australia faces similar challenges to New Zealand: large areas of land with limited coverage, infrastructure that needs to be monitored remotely, and increasing demand for mobile and resilient tracking solutions.

Cat 1bis offers:

  • Reliable LTE connectivity across existing networks
  • Global roaming support, ideal for devices used across borders
  • Moderate data throughput for applications like GPS tracking, event logging, and emergency triggers
  • Low enough power use to support long battery life in off-grid deployments

What makes it particularly relevant now is its growing compatibility with LEO (Low Earth Orbit) satellite systems. That means devices using Cat 1bis today can potentially fall back to satellite when outside of cellular range, opening up new options for emergency communications and high-resilience IoT.

While Cat 1bis is still gaining traction with Australian telcos, we expect rollout to accelerate quickly, particularly given the regulatory push for improved remote area connectivity and interest in hybrid satellite-cellular IoT deployments.


Who is building with Cat 1bis already?

Digital Matter’s new generation of devices is a good indicator of where things are heading. Their Oyster3 Global and Remora3 Global both support Cat 1bis and are being used in agricultural, environmental and logistics settings. These devices are compact, battery powered, and built for the harsh Australian environment.

More chipset and module makers, including u-blox, Quectel and ASR, are building Cat 1bis support directly into their modules, so we can expect more compatible devices on the market by the end of 2025.

For any organisation rolling out IoT infrastructure today, especially in the public sector, it is worth choosing hardware that already supports Cat 1bis.


In summary

Cat 1bis may not be a buzzword, but it is a quiet shift that will have a big impact in practical IoT.

  • It runs on existing LTE infrastructure with no network upgrades required
  • It supports real roaming and fallback options where NB-IoT and LTE-M do not
  • It allows for smaller, more affordable devices with long battery life
  • It is already being rolled out in New Zealand, with Australian support not far behind
  • It pairs well with hybrid connectivity strategies, including satellite fallback

If you are deploying devices that need to operate in remote or challenging environments, Cat 1bis should be part of your connectivity plan.

Now is the time to talk to your hardware vendors and telcos. The support is coming and it is coming fast.

Why Australia Is Built for Real IoT

Australia is a country of harsh distances, complex landscapes, and practical people. It is also one of the best environments in the world for meaningful Internet of Things deployments. Not the flashy kind that controls your coffee machine, but the kind that helps you prevent a flood, catch a feral pig, or secure a remote gate before someone gets hurt.

This is IoT (Internet of Things) with a purpose.


Our land demands it

You cannot physically monitor everything. It is not just about cost. In many places, it is simply not possible.

One council officer cannot check hundreds of traps across a national park. A property manager cannot be on every site when the water starts to rise. A landholder cannot be expected to detect every fence breach in real time. But with the right sensor in the right place, they do not have to.

With LoRaWAN, NB-IoT and Cat-M1 now widely available, it is possible to deploy small, low-power devices that send useful data over long distances. These sensors can run for years on battery power and operate far beyond mobile coverage or fixed power infrastructure.

This technology was made for a country like ours.


Disasters are regular, not rare

In Australia, floods, bushfires and extreme weather are not outliers. They are part of everyday risk planning.

We know that early warnings save property, reduce recovery costs, and in some cases, save lives. We also know that most infrastructure failure starts small. A sump pump stops responding. A low point starts to collect water. A culvert overflows after hours.

If you can catch these moments early, you can act before it becomes a disaster. That is what IoT enables.


Biosecurity is a national priority

Feral animals, invasive weeds and pest insects are eating into our ecosystems and our economy. They move quickly and quietly through landscapes that are difficult to monitor.

With GPS collars, motion sensors, trail cameras and AI-based recognition, we now have better ways to observe and manage these threats. You cannot eliminate what you cannot find. But when you know what is moving, and when and where it is moving, you have options.

This applies equally to feral pigs, wild dogs, exclusion zones, and remote access points.


Most infrastructure is unmanaged

Across Australia, infrastructure is ageing, remote, and under pressure. That includes everything from pump stations and levees to gates, roads, carparks and shared basements.

Most of it is not being watched. Failures are often only detected after the damage is done. But with lightweight IoT systems in place, it is possible to monitor key points across multiple sites without needing a team of technicians or a dedicated control room.

You do not need a smart city budget. You just need the right tool in the right place.


We need Australian systems

If the goal is long-term reliability, Australian organisations need technology that is designed for local conditions and hosted onshore. That means platforms that are built with open protocols, work well with known hardware, and are simple enough for regional teams to use without training courses or consultants.

It also means no overseas lock-in, no dependence on one cloud provider, and no hidden licensing traps.

Australia deserves systems that are clear, accountable and sovereign.


What we do now matters

Over the next few decades, our ability to manage water, land, and biodiversity will depend on the quality of the data we collect. IoT is not the whole answer, but it is a vital part of the toolkit.

The challenge now is to stop trialling and start rolling out.

We already have the networks. We already have the hardware. We already have proven use cases across government, agriculture, conservation and property.

Now it is about leadership. Practical thinking. And building things that last.

If you are ready to move beyond proof of concept and into real-world outcomes, we are ready to work with you.

UnrealFest24

Last week, the team from Refactor attended UnrealFest24, organised by Epic Games. This two-day conference focused on the Unreal Engine development platform. While game development is not a core function of Refactor, many challenges faced in game development are relevant across various industries, including those of our clients. It was also a great opportunity for the team to get together and do something a bit different, exploring new technologies and methodologies that could inspire innovative solutions in our own projects.

The conference featured in-depth sessions on Unreal Engine (UE) and Unreal Editor for Fortnite (UEFN), as well as a variety of non-gaming topics such as education, business development, and production. Presentations covered advanced features like Nanite (virtualised geometry for effectively unlimited polygon detail) and Lumen (dynamic lighting and reflections), along with general talks, including a few by WetaFX on the creation of the short film “The War is Over.” There were also round-table discussions and a developer lounge for engaging with Epic employees.

Overall, the Refactor team found the conference highly enjoyable and informative.

Steve: Last week at Unreal Fest on the Gold Coast, the standout talk for me was by Inge Berman, creator of “Cat Cafe” in Fortnite. Intrigued by her virtual take on a cat cafe, especially as someone allergic to real cats, I found her story fascinating. Inge, an artist with minimal coding skills, cleverly utilised existing Fortnite assets and imported some purchased Unreal Engine assets to create “Cat Cafe.” For example, she crafted a cave entrance by half-burying a boulder and placing two more on top to resemble cat ears. Her talk provided valuable practical advice, such as avoiding clumping too many assets together due to memory issues and spreading experiences across the island. She suggested choosing an island with plenty of trees for those wanting a forested environment to avoid excessive biome placement. Inge was also transparent about the financial side, offering insights on monetising a game and recommending third-party sites for effective marketing. Her talk demonstrated how artistic flair and clever asset use can create an engaging game without extensive coding skills, and still be profitable.

Rob: My favourite talk was delivered by Axel Riffard. He demonstrated the impressive capabilities of the Sony Mocopi system by capturing his body movements and displaying them directly on a MetaHuman within the UEFN editor. Additionally, Axel used his iPhone to record his facial expressions, which he then mapped onto a MetaHuman. The fact that he did this as a live demo was a highlight, showcasing the entire process from capture to viewing in-game within Fortnite, all within the 40-minute talk. Axel’s presentation brilliantly illustrated the seamless integration of cutting-edge technology in game development, leaving a lasting impact on the audience.

Skip: I was blown away by Chris Murphy’s “Procedural Content Generation” talk. Watching someone create and modify photorealistic scenery from scratch, in real time, was incredible. By quickly editing Blueprint nodes and adding different meshes to an area fill, he created stunning imagery of walking paths through dense, mountainous forest. The power that Unreal Engine puts in the hands of users like Chris was intriguing enough that I finished the day by downloading a copy to try out.

Tyler: The talk I enjoyed most at Unreal Fest was about Advanced Blueprinting Techniques in Unreal Engine, presented by Senior Technical Artist Matt Oztalay. It was fascinating to see Blueprints expanded beyond their usual scope and beyond my existing, very basic knowledge of them. Matt emphasised that while there are multiple ways to achieve a goal in game development, some methods are more efficient than others. He highlighted how Blueprints can optimise memory usage, allowing for complex outcomes without requiring expertise in C++. By delving into the frameworks, systems, and design paradigms of the Blueprint visual scripting system, he showed that developers can push beyond perceived limits, leveraging Blueprints for sophisticated tasks without compromising efficiency or performance.

Wade: The talk by Ivan Ertlov and Caitlin Lomax, titled “The Legacy of Cthulhu Returns: Revitalizing a Legacy IP with Unreal Editor for Fortnite,” focused on the business and public relations aspects of adapting an old IP into a Fortnite custom island using UEFN. I enjoyed hearing about the challenges and advantages of this process. A key point was the balance between retaining the original gameplay of “The Legacy of Cthulhu” and making it appealing to the Fortnite audience. They discussed whether it’s possible to cater to Fortnite players while preserving the essence of the original game, or if it should stay true to its roots at the risk of alienating most players. Another challenge highlighted was the technical limitation of UEFN, which currently doesn’t support custom items or weapons without significant issues. This could hinder the studio’s vision, but it was encouraging to hear during the UEFN roundtable that this capability will be available in the future.

Matt: A highlight for me was the pair of talks by WetaFX on the short film “The War is Over.” The first covered taking the client’s requirements for the look and feel of the film, from the artists’ point of view, and the research into character development and models for implementing the final product. They also touched on the importance of prototyping and failing early, as UE was not a tool they had used in the past. The second talk covered the technical side, discussing some of the pros and cons of using a real-time rendering engine for production as opposed to the traditional render farms they had used to date. The quick iteration it enabled, getting footage back to the team at six frames per second rather than hours or days per frame, was invaluable and worth any real-time rendering issues they encountered.

Heath: A highlight for me at Unreal Fest was the demo titled “Unreal Engine for Live Performance: Big Sand Case Study,” presented by Sally Coleman. This session explored integrating live performers into Unreal Engine, highlighting the unique opportunities it offers to unite games, music, and live performances. The presentation showcased how the animated sci-fi band Big Sand was created using Unreal Engine, utilising motion capture, streaming, and networking techniques. Big Sand’s first live shows were groundbreaking, pushing the boundaries of real-time performance in physical venues. Sally Coleman delved into the intricacies of creating a live performance within Unreal Engine, explaining the complexities with remarkable clarity. Seeing motion capture, streaming, and networking techniques work together seamlessly in real-time was magical. I learned not just about the technical aspects but also about the creative potential that Unreal Engine unlocks, with endless possibilities for merging games, music, and live performances. The work with Big Sand is a testament to that potential.

Confluence to WikiJS

Over at Refactor we have a client using on-prem Atlassian Confluence, for whom moving to the cloud version was prohibitively expensive (Atlassian sadly stopped supporting their on-prem Server edition recently). After showing them WikiJS, we were tasked with the challenge of getting all of their content from the existing system into a WikiJS server. The primary objectives were better navigation, more helpful search, and a generally more pleasant look and feel.

We started with an export of the existing Confluence site. This resulted in a folder with a whole bunch of HTML files, along with a few extra directories of attachments, CSS and the like. Our next step was to try to get them into a WikiJS installation.

We did not want to simply load the files into an existing WikiJS service. If it did not work the way we wanted, we would have needed to restore from previous backups, or try to delete files and fix up existing pages. Messy at best! To solve this we welcomed our friend Docker Compose. The WikiJS project provides a docker-compose.yml file, which made it super simple: clone the repo, modify the docker-compose.yml to mount a directory we could use for transferring files into the container, run docker compose up -d, navigate to http://localhost and go through the setup.
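For reference, the only change we needed to the stock docker-compose.yml was an extra bind mount on the wiki service (the host path is ours; everything else is as the WikiJS project ships it):

    services:
      wiki:
        # ...image, ports and database settings as per the stock file...
        volumes:
          - ./import:/import   # host folder we drop the Confluence export into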

From there we dumped the Confluence export directly into the mounted folder, told WikiJS to import all the files, and waited. The results were… less than ideal. Navigation was non-existent, and Confluence had added a bunch of non-content items to every exported page: “This page was last modified by” footers, “Most recently edited pages” lists, and so on. We investigated the tools available for massaging the exported files into something WikiJS might find more suitable. We tried a few, but none seemed to work “out of the box”, at least not with the export we were working with. Time for a custom solution.

This was not a project we could spend a lot of time on. The result was a “quick-n-dirty” console program written in Go, which does the following…

  • Compiles a list of all the files in the export and saves them in a data structure with a “From” and a “To” key, both initially set to the same value: the file’s location in the original export.
  • Loops through this list and, for each HTML file, parses it and looks for a “breadcrumb” section. If one exists, works out the hierarchy for the page and updates the “To” key to reflect its new location.
  • Loops through the list again. This time, if a file is not HTML it is simply copied into the target directory. If it is HTML, the file is parsed once more and the breadcrumb section removed. At this point we drop as many parts of the document as we can, to clean up the extra elements Confluence added that we are not interested in. Then two things need to happen: if the file has moved to a different place in the hierarchy, every link in it must be rewritten relative to its new location; and any links pointing at other files that have moved must be updated too. The eventual solution was a horribly inefficient series of loops over the mapped file locations and the HTML document structure, but even for a reasonably large set of files, Go screams through them at a fantastic pace. “Quick-n-dirty” is the operative phrase! Finally, each modified HTML file is rendered into its new position in the destination folder. A sketch of the approach follows below.
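
The real converter is written in Go (linked below), but the mapping idea is compact enough to sketch. Here is a TypeScript rendition of the first two passes; the breadcrumb selector is what our export happened to contain and may differ for other exports:

    import * as fs from 'fs';
    import * as path from 'path';

    interface Mapping { from: string; to: string }

    // Walk the export directory and collect every file path.
    function listFiles(dir: string): string[] {
      return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
        const p = path.join(dir, entry.name);
        return entry.isDirectory() ? listFiles(p) : [p];
      });
    }

    // Crude breadcrumb extraction; the real tool parses the HTML properly.
    function breadcrumbs(html: string): string[] {
      const section = html.match(/<ol id="breadcrumbs">([\s\S]*?)<\/ol>/);
      if (!section) return [];
      return [...section[1].matchAll(/<a[^>]*>([^<]+)<\/a>/g)].map((m) => m[1].trim());
    }

    const exportDir = 'export';

    // Pass 1: every file initially maps to its original location.
    const files: Mapping[] = listFiles(exportDir).map((p) => ({ from: p, to: p }));

    // Pass 2: move HTML files under the hierarchy their breadcrumb describes.
    for (const f of files) {
      if (!f.from.endsWith('.html')) continue;
      const crumbs = breadcrumbs(fs.readFileSync(f.from, 'utf8'));
      if (crumbs.length) {
        f.to = path.join(exportDir, ...crumbs, path.basename(f.from));
      }
    }

    // Pass 3 (elided): copy non-HTML files as-is, strip the Confluence chrome from
    // each HTML file, rewrite every internal link against the from->to map, then
    // write each page into its new home.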

Running the import into WikiJS now results in a nice hierarchy, with images and attachments all located correctly. The site looks good, and the WikiJS search feature works much better for our client than the old Confluence search did. The code is not exactly production quality, but it fits its purpose well and is available on our GitHub page.

https://github.com/refactorau/ConfluenceWikijsConverter