Many organizations are eager to adopt microservices, sometimes before they even know if they need them. Knowing when they fit a need makes all the difference, and sometimes, not using them is the smarter move.
There are some cases where a microservice architecture is your best bet:
If your project has to support multiple technologies that don't work together natively, microservices are a natural fit. Take my experience with the Whitelist Sync Web project:
Originally, this project ran on a 100% .NET backend with a Vue frontend. Later, I migrated to a Node backend with a React frontend. However, I still needed to support SignalR, Microsoft's real-time communication technology, because client applications in the field depended on it. The challenge? SignalR server-side hosting is only supported in .NET; Node cannot host a SignalR hub.
Removing SignalR from the project wasn’t an option (unless I was willing to rewrite and redeploy all the client apps—which was out of scope). The solution was to create a separate SignalR microservice: a C# project dedicated to SignalR, communicating with the Node backend through JWT auth and REST endpoints. A reverse proxy routed /hubs/ requests to the SignalR service, while all other traffic hit the React app. The entire setup was managed using Docker Compose.
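For the curious, here's a minimal sketch of what a standalone SignalR service like that can look like. The hub name, route, and JWT settings below are illustrative stand-ins, not the actual Whitelist Sync code:

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.SignalR;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

// Validate the same JWTs the Node backend issues (shared signing key).
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidIssuer = "node-backend", // hypothetical issuer name
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!))
        };

        // SignalR's WebSocket clients pass the token as a query parameter.
        options.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                var token = context.Request.Query["access_token"];

                if (!string.IsNullOrEmpty(token) &&
                    context.HttpContext.Request.Path.StartsWithSegments("/hubs"))
                {
                    context.Token = token;
                }

                return Task.CompletedTask;
            }
        };
    });

builder.Services.AddAuthorization();
builder.Services.AddSignalR();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// The reverse proxy forwards /hubs/* here; everything else hits the React app.
app.MapHub<SyncHub>("/hubs/sync").RequireAuthorization();

app.Run();

// Hypothetical hub; real hub methods depend on what the clients expect.
public class SyncHub : Hub
{
    public Task Broadcast(string message) =>
        Clients.All.SendAsync("ReceiveMessage", message);
}
```

The OnMessageReceived handler matters because WebSocket clients can't send an Authorization header, so SignalR delivers the token as an access_token query parameter instead.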
Microservices can be helpful for extending existing applications. If you want to add new functionality using a different tech stack—or isolate new features for a big team—they let you do this without rewriting your monolith.
Splitting your app into smaller, independently hosted pieces means a failure in one service won’t crash the entire application. Of course, you can build robust error handling into a monolith, but microservices can make fault isolation easier.
Cloud providers offer load balancing for monoliths, but microservices can provide more granular scaling. Just keep in mind, if you don’t have heavy load or growth requirements, this might not be worth the extra complexity and cost.
Microservices let you mix and match tech: imagine a Node backend, a React frontend, and a Python microservice for AI features. Each part of your app can use the best tool for the job.
While microservices have their place, they also come with significant downsides:
Running multiple services means more infrastructure, more DevOps work, and more cloud spend. If your application has low demand, this cost is often unjustified. Starting new projects with a single stack keeps things cheaper and simpler.
Multiple services mean more to manage: logging, monitoring, orchestration (hello, Kubernetes), and maintenance. All of this adds to the operational burden.
Using cloud-specific services like Azure Functions ties your app to one provider. Migrating later is possible, but few businesses want to refactor dozens of microservices just to escape rising costs.
Deploying a monolith is straightforward. Microservices require complex CI/CD pipelines and orchestration. Tools like Fynydd fdeploy can help, but they add yet another layer of infrastructure.
With more moving parts, it’s harder to add features, fix bugs, and onboard new team members.
Microservices make authentication harder. Instead of just handling user auth, you now need to manage service-to-service authentication, which can be complicated and error-prone.
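To give a flavor of that extra work, here's a rough sketch of one common pattern: the calling service mints a short-lived JWT with a shared secret, and the callee validates it. The service names and key handling are hypothetical; in practice the key would come from a secret store, and you might prefer asymmetric keys or a platform identity service:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Text;
using Microsoft.IdentityModel.Tokens;

// Illustrative only: a symmetric key both services share.
// In production this would come from a secret store, never a literal.
var key = new SymmetricSecurityKey(
    Encoding.UTF8.GetBytes("a-32-byte-minimum-shared-secret!!"));

// Caller: mint a short-lived token identifying this service.
var token = new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(
    issuer: "orders-service",      // hypothetical service names
    audience: "billing-service",
    expires: DateTime.UtcNow.AddMinutes(5),
    signingCredentials: new SigningCredentials(key, SecurityAlgorithms.HmacSha256)));

// Callee: validate the signature, issuer, audience, and lifetime.
var principal = new JwtSecurityTokenHandler().ValidateToken(token,
    new TokenValidationParameters
    {
        ValidIssuer = "orders-service",
        ValidAudience = "billing-service",
        IssuerSigningKey = key
    }, out _);

Console.WriteLine($"Caller authenticated: {principal.Identity?.IsAuthenticated}");
```

Multiply that by every pair of services that talk to each other, plus key rotation and clock skew, and the added surface area becomes clear.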
Given all these costs, it’s clear: Start simple. For most projects, especially those with low load or a single technology stack, a monolith is the best starting point. Design your application with modularity and future growth in mind, so you can break it into microservices if you ever need to. But don’t jump into microservices unless you’re solving real problems that require them.
Further reading: You Don’t Need Microservices (itnext.io)
Modern computers, laptops, and mobile devices use solid state drive (SSD) storage, which is power efficient and fast! Since SSDs have no moving parts they're also more durable than hard disk technology in mobile scenarios.
But SSD storage does have limitations. Two primary concerns are limited write endurance and long-term data retention.
Essentially, data is written to SSD storage as static charges in individual cells. These cells would normally hold a charge for a very long time, but the act of charging a cell is destructive: it takes a high voltage to weaken the cell barrier before the cell can be charged, and every write permanently weakens that barrier a little more. Eventually the cell will not be able to reliably store a charge.
SSDs manage this problem in a few ways. One tactic is wear leveling: rather than repeatedly writing to the same cell, the drive writes to fresh cells whenever possible, spreading the wear evenly across all of them. Another strategy is to keep a bank of extra (hidden) cells in reserve. When the SSD sees that a cell has gone sufficiently "bad", one of these backup cells quietly takes its place. All of this happens in the background.
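To make those two strategies concrete, here's a deliberately simplified toy model in C#. Real SSD firmware is vastly more sophisticated, so treat this as an illustration of the idea, not a description of actual drives:

```csharp
using System.Collections.Generic;
using System.Linq;

// Toy model of wear leveling and spare-cell remapping.
class ToyFlash
{
    const int MaxWrites = 1000;                // writes a cell tolerates before going "bad"
    readonly int[] wear;                       // per-cell write count
    readonly HashSet<int> retired = new();     // cells quietly mapped out as bad
    readonly Dictionary<int, int> map = new(); // logical block -> physical cell

    public ToyFlash(int totalCells) => wear = new int[totalCells];

    // Writing a logical block never reuses the same physical cell when a
    // fresher one is available (wear leveling), and worn-out cells are
    // retired so reserve cells take their place (remapping).
    public void Write(int logicalBlock)
    {
        int cell = Enumerable.Range(0, wear.Length)
            .Where(c => !retired.Contains(c) && !map.ContainsValue(c))
            .OrderBy(c => wear[c])
            .First();                          // least-worn free, healthy cell

        wear[cell]++;
        map[logicalBlock] = cell;              // the old cell rejoins the free pool

        if (wear[cell] >= MaxWrites)
            retired.Add(cell);                 // retire it; a spare steps in next time
    }
}
```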
As cells lose their ability to hold a charge, the first symptom is a slowdown in reads. The SSD will try to read a cell, which sometimes returns a bad value (according to an ECC check), so it has to read it again, likely at a different voltage. Eventually the cell returns the correct value. But these repeated read attempts noticeably slow overall drive performance.
For computers and SSD drives that stay powered off for extended periods, you'll see advice recommending that you power on the device every so often. But all that really does is give the SSD a chance to mark bad cells, and only if the host happens to read or write those bad cells in the first place. Some high-end SSDs will periodically rewrite cells on their own to refresh the data, but consumer SSDs don't typically do this. To be clear: powering up an SSD does not recharge the cells or truly address these issues.
New SSDs can reliably store data for several years without power. But after actively using an SSD for months or years, it makes sense to begin periodically refreshing the cells. This not only ensures more reliable storage over time, it can also noticeably speed up SSD performance.
I ran some tests on my local workstation to verify these claims. I used a two-year-old MacBook Pro whose SSD boot drive has stayed more than half empty, ensuring plenty of fresh cells were available for writes. It has been through several OS upgrades and a couple of format-and-rebuild cycles.
That Mac booted to the login screen in 16.6 seconds. After refreshing the SSD with the same data, it booted to login in 14 seconds, over 15% faster. This suggests that overall performance should improve too, at least where storage transfers are concerned. So even on a relatively current machine there was a noticeable speed increase. As a software developer, though, the biggest benefit for me was the improved reliability.
So, if you want to refresh an SSD, here are some quick guides to walk you through the process.
The easiest way to refresh your SSD on Windows is to use SpinRite (https://www.grc.com/sr/spinrite.htm). This is a time-tested, rock solid utility for hard disk maintenance and recovery, which can also handle SSD storage. Run this tool on level 3 to refresh all the cells and map out any bad cells. It will also work wonders on your hard disks.
Note: you need a computer with an Intel-compatible (x86) processor. SpinRite will not run on Arm.
Another way to do this without additional software is to make a system image of your drive using the poorly named "Backup and Restore (Windows 7)" control panel. This clones your entire drive (even the recovery partition) to a USB flash drive or other external media. You can then boot into recovery mode and restore the entire drive from that system image. You'll end up with the same PC with all your files intact. And you will have a backup of your drive for future use.
Both of these methods will return your SSD to like-new performance, and ensure longer data retention.
Unlike on Windows, there are no great SpinRite-style utilities for modern Apple Silicon Macs. But fear not! There is a way to refresh SSD cells using the built-in Time Machine feature, and it's pretty easy. You will back up your Mac, erase it, reinstall macOS, and then restore the backup.
Connect an external storage device to your Mac and configure it in Time Machine as your backup device. Then run a backup.
Note: some applications, like Docker, do not allow Time Machine to back up their data by default. In the case of Docker there is an option to enable this.
Once you have a complete backup, restart your Mac into recovery mode. On modern Apple Silicon Macs, shut down the computer, then press and hold the power button until the Mac says it is loading startup options.
Use Disk Utility to erase the SSD, and then choose to reinstall macOS.
After the OS is installed it will restart and run Migration Assistant.
Choose to transfer files from Time Machine, and follow the instructions. It will show you all Time Machine backups for connected drives. Choose the latest backup entry for your backup drive, and let Migration Assistant do its thing. You will be left with a refreshed SSD with all your files intact.
The research in this area is nascent, so the optimal frequency for refreshing your SSD cells really depends on how well it is performing, how many writes have been made, and how full it is on average. On my server data drive I rarely write new files. But the data is very important. So I'm planning on refreshing the cells yearly just to be safe.
So how often should you run this process? If your SSD is under 10 years old and averages under 50% full, yearly should be sufficient. As your SSD ages (or if it stays mostly full), it may be better to refresh more frequently.
Planning out a long-term strategy for your web project can really pay off. We were recently reminded of that when we were asked to create a mobile app (iOS and Android) for a web-based platform we designed and built several years ago. The platform is Coursabi, a learning platform that ensures growth at each milestone for everyone on your team. You can check it out at https://coursabi.com.
When we created the technical strategy, we knew that a mobile app was a likely roadmap item. So we chose ASP.NET Blazor as the core platform technology. It allowed us to build a web app that felt like a single page app (SPA). And it gave us several hosting models: server, WASM (WebAssembly), and hybrid mobile. The most intriguing aspect of the Blazor Hybrid model is that, unlike hybrid apps of the past, there is no web server running on the mobile device. Instead, all the C# code runs on the device as compiled .NET code, and the web view (an embedded web browser) is used only to render the user interface. So the app runs as a native mobile app!
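For reference, here's roughly what the host wiring for a Blazor Hybrid app looks like in a .NET MAUI project. This follows the standard template shape; the namespace and the commented service registration are hypothetical, not Coursabi's actual code:

```csharp
using Microsoft.Extensions.Logging;

namespace MyHybridApp; // hypothetical project namespace

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder.UseMauiApp<App>(); // App comes from the MAUI template

        // Registers the BlazorWebView control: the C# runs as compiled
        // .NET code on the device; the web view only renders the Razor UI.
        builder.Services.AddMauiBlazorWebView();

#if DEBUG
        builder.Services.AddBlazorWebViewDeveloperTools();
        builder.Logging.AddDebug();
#endif

        // The same services the web app registers can be added here,
        // which is what makes sharing Razor components practical, e.g.:
        // builder.Services.AddSingleton<ICourseService, CourseService>();

        return builder.Build();
    }
}
```

The app's main page then hosts a BlazorWebView control pointing at the root Razor component, which is how the web app's components render inside the native shell.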
We knew that some features of the platform would have to be altered, since the mobile app has no web server. For example, Coursabi supports the SCORM format for external learning content, and due to security restrictions, that content needs to be served from a host with a trusted root certificate. Moving SCORM hosting out of the platform and handling the routing changes were both necessary, but totally doable.
Another benefit of a mobile app version of the platform is that it simplifies the security model in many ways: the app runs only on the local device, whereas a hosted app must manage user state, among other concerns.
If you have an ASP.NET-based web application, you can still leverage Blazor Hybrid to turn it into a mobile app. It just needs to first be migrated into a Blazor app. I'd also recommend reviewing your web app for opportunities to make it as mobile-friendly as possible. You don't want your mobile app to look or feel like a website. But those changes not only get you a great mobile app, they also improve how your app looks and feels in a mobile web browser. So you get twice the value.
Sfumato CSS 5.1.0 has been released!
For more information, see the Sfumato project page.
.NET 9 was officially released during .NET Conf. This release feels like an LTS release: full of speed improvements and quality-of-life features and refinements, even if you only use it as a drop-in replacement for .NET 8.