The term AI is more of a misleading brand name than an accurate description of the technology. That distinction is causing real problems for people in many industries who increasingly rely on AI tools. One example is the case of Judge Julien Xavier Neals of the District of New Jersey, who had to withdraw an entire opinion after a lawyer politely pointed out that it was riddled with fabricated quotes, nonexistent case citations, and completely backwards case outcomes. You'd think a judge would be more careful, but then again, if they're not tech-savvy you can see how they could be misled by the promise of AI.
In the 1983 movie WarGames, a teenage computer whiz accidentally hacks into a U.S. military supercomputer (named WOPR, for "War Operation Plan Response") while searching for video games, unknowingly triggering a potential nuclear crisis. As the system begins running a simulation it mistakes for a real attack, he must race against time to convince the AI to call off a global thermonuclear war.
So yeah, WOPR is what people today consider AI: artificial general intelligence (AGI), to be specific.
Released in 2008, the movie Iron Man features billionaire inventor Tony Stark, who is captured by terrorists and builds a powerful armored suit to escape captivity. He later refines the suit to fight evil, using a digital personal assistant named JARVIS (Just a Rather Very Intelligent System) to coordinate all his technology through voice commands.
JARVIS is also AGI.
OpenAI recently released GPT-5 to mixed reviews. One such review was the blueberry test by Kieran Healy. He asked ChatGPT, "How many times does the letter b appear in blueberry," to which ChatGPT responded, "The word blueberry has the letter b three times." No matter how hard he tried to convince the AI that there are only two letter Bs in the word blueberry, ChatGPT was absolutely positive there are three.
People expect and believe that AI has human-level or higher intelligence and is able to understand, learn, and apply knowledge in any domain, adapt to new problems, and reason abstractly. That would include knowing how to spell the word blueberry.
What we have with AI today is really a marketing problem. It is not a Mechanical Turk with a person hidden inside; it is a transformational technology, and it's here to stay. It will improve over time, and it has the potential to make our lives better in many ways. But we need to understand what it is, and more importantly, what it is not.
Then what is AI?
Modern large language models (LLMs) like ChatGPT are trained on vast datasets covering a wide range of human-created content—from websites and books to transcripts, code, and other media. Instead of simply storing this data, the model uses neural networks to learn patterns in language, encoding knowledge as mathematical relationships. When generating responses, the LLM doesn’t look up answers in a database; it predicts the most likely sequence of words based on the context, drawing on statistical patterns it learned during training. LLMs operate through probabilistic prediction rather than direct retrieval, and they lack true understanding or reasoning in the human sense. Without ongoing training on the latest human-generated content, LLMs will become increasingly less useful.
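To make "predicts the most likely sequence of words" concrete, here is a toy sketch in C# (a .NET 6+ console app with implicit usings). A real LLM computes probabilities with a neural network over tens of thousands of subword tokens; the hard-coded distribution below is purely illustrative:

```csharp
// Toy next-token prediction: given the context "The sky is", sample
// one continuation from a (made-up) probability distribution.
var random = new Random();

var nextTokenProbabilities = new Dictionary<string, double>
{
    ["blue"] = 0.70,    // frequent continuation in the training data
    ["clear"] = 0.15,
    ["falling"] = 0.10,
    ["banana"] = 0.05   // unlikely, but never impossible
};

// Pick a random point in [0, 1) and walk the cumulative distribution.
string SampleNextToken(Dictionary<string, double> probabilities)
{
    var roll = random.NextDouble();
    var cumulative = 0.0;

    foreach (var (token, probability) in probabilities)
    {
        cumulative += probability;

        if (roll < cumulative)
        {
            return token;
        }
    }

    return probabilities.Keys.Last(); // floating-point safety net
}

Console.WriteLine($"The sky is {SampleNextToken(nextTokenProbabilities)}");
```

Notice that nothing here looks up a "correct" answer. The model only ever produces a statistically plausible next token, which is exactly how confident nonsense like three Bs in blueberry can come out.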
So we're dealing with a simulated intelligence, not an artificial one. It's like the difference between precision and accuracy: you can be very precise, but completely wrong. The distinction matters. There is no real intelligence at play here, which is why the word blueberry has three Bs, the judge's opinion has nonexistent citations, and Google's AI recommended glue as the solution for making cheese stick better to pizza.
Once people really see that it's a simulation, albeit a very powerful and helpful one, responsible use of the technology will be far less of a problem.
In today's fast-paced digital world, managing cloud costs has become a critical challenge for businesses of all sizes. With the sheer volume of services AWS provides, it can be difficult to keep track of spending and identify potential areas for optimization. That's where the recently launched AWS Cost Optimization Hub comes into play. It is a powerful tool designed to simplify cost management, drive savings, and give organizations the insights they need to maximize the efficiency of their cloud infrastructure.
“Real-world users have reported that organizations using the AWS Cost Optimization Hub can save 30% or more on their cloud bill.”
The AWS Cost Optimization Hub is a centralized platform within the AWS Management Console, under Billing and Cost Management, that provides customers with a simple, yet potent set of features to help identify and implement cost-saving measures across their AWS environments. It essentially aggregates all AWS cost optimization services into a single, user-friendly dashboard, and allows you to:
Identify underutilized resources: pinpoint instances or services that are underutilized and may no longer be needed.
Analyze spend patterns: gain visibility into your spending across accounts, regions, and services to spot trends and anomalies.
Implement recommendations: access automated, actionable recommendations that align with best practices for reducing unnecessary costs.
Explore pricing options: easily explore alternative pricing options like reserved instances or savings plans, which can provide significant discounts when you commit to long-term usage.
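The hub is also exposed through the AWS CLI, which is handy for scripting or pulling recommendations into reports. This is a sketch under the assumption that your CLI version includes the cost-optimization-hub service; run `aws cost-optimization-hub help` to confirm the commands and flags available to you:

```bash
# Opt the account in to the Cost Optimization Hub (one-time step).
aws cost-optimization-hub update-enrollment-status --status Active

# List the current cost-saving recommendations for the account.
aws cost-optimization-hub list-recommendations --output table
```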
One of the best features of the AWS Cost Optimization Hub is how simple it is to get started. AWS has made it incredibly easy to enable the hub, even for users who are new to cost management.
If you're already an AWS customer, the Cost Optimization Hub is already available to you. All you need to do is log in to your AWS Management Console, navigate to the Cost Management section, and you'll see the Cost Optimization Hub option. No complex setup is required! So, with just a few clicks, you'll have access to your personalized cost optimization dashboard.
Is it really worth it? Yes! You can get significant savings with minimal effort using this AWS tool. But don't just take my word for it.
Below is an example based on one of our smaller clients' AWS environments.
They recently engaged us to begin the cost optimization process, and so far, just by using this tool, we were able to find almost 20% in savings. While this doesn't represent all of the money-saving and efficiency changes we believe can be made, it gave us a list of changes that are easily shared with the customer. We even get an indication of the effort, risk, and reversibility of each change.
The AWS Cost Optimization Hub is a game-changer for anyone looking to take control of their AWS spending. Not only does it simplify the complex task of cost optimization, but it also provides actionable insights and recommendations that can lead to significant savings. Plus, it’s incredibly easy to enable, even for beginners — making it accessible to a wide range of users across various industries.
So, if you're looking to reduce your AWS cloud costs while optimizing your resource usage, start with the AWS Cost Optimization Hub and begin unlocking the potential savings that await!
Starting Friday, August 1, you'll no longer be able to save or manage passwords or rely on auto-fill features in Microsoft Authenticator, as it drops support for its password manager role.
Earlier this summer, Microsoft revealed that they are moving away from using passwords for account authentication and will be using passkeys.
This change is aimed at improving security, since passwords are a security nightmare. A recent survey found that 49% of adults in the US practice poor password habits, like reusing passwords or picking easy-to-guess ones. This leaves users vulnerable to cyberattacks, data breaches, ransomware, and more.
But even when you use a strong password, phishing and social engineering can trick you into giving it to the wrong person. For example, you can be misled into visiting a website that looks exactly like your bank's, and when you try to sign in, the bad guys get your credentials.
Yikes.
Passkeys are not vulnerable to these attacks. In fact, you won't know your passkeys so you can't give them out. And the nefarious server can't perform the negotiation necessary to use your passkey.
The transition to passkeys is happening soon, so it’s a good time to understand how Microsoft will handle this shift and to consider a replacement password manager if necessary.
So, what are passkeys? Passkeys are credentials developed by the Fast Identity Online (FIDO) Alliance, whose underlying technology has been around for decades. They let you use biometrics (like your fingerprint or face) or a device PIN to verify who you are. Think of logging in with Face ID or a fingerprint instead of typing a password. This approach offers stronger protection against guessing and phishing.
Why? Weak passwords are vulnerable to being guessed, but passkeys require both a public and a private (device) key to authenticate. This prevents phishing/social engineering, brute-force, and credential-stuffing attacks.
What if I use a strong password? That helps, but a hash of your password is typically stored on the server so it can be verified during login. If the database is breached, attackers can work to reverse engineer your password from that hash. Passkeys don't require any secret to be stored on a server: the server keeps only your public key, which is useless to an attacker on its own, while the private key never leaves your device. And because modern encryption does the work, there are no complex passwords to remember.
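Conceptually, a passkey login is a challenge-response signature. Here's a minimal sketch in C# using a raw ECDsa key pair; real passkeys follow the full WebAuthn/FIDO2 protocol, so this only illustrates why there's no shared secret for an attacker to steal:

```csharp
using System.Security.Cryptography;

// Registration (device side): create a key pair. The private key never
// leaves the device; only the public key is sent to the server.
using var deviceKey = ECDsa.Create(ECCurve.NamedCurves.nistP256);
byte[] storedPublicKey = deviceKey.ExportSubjectPublicKeyInfo();

// Login (server side): send a fresh random challenge.
byte[] challenge = RandomNumberGenerator.GetBytes(32);

// Login (device side): after the user verifies with a fingerprint,
// face scan, or PIN (omitted here), sign the challenge.
byte[] signature = deviceKey.SignData(challenge, HashAlgorithmName.SHA256);

// Login (server side): verify the signature with the stored public key.
// A phishing site that captures this exchange learns nothing reusable.
using var verifier = ECDsa.Create();
verifier.ImportSubjectPublicKeyInfo(storedPublicKey, out _);
bool isValid = verifier.VerifyData(challenge, signature, HashAlgorithmName.SHA256);

Console.WriteLine(isValid ? "Login verified" : "Login rejected");
```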
According to the May 1 Microsoft blog post, Microsoft will soon guide users to set up passkeys as the main way to sign in to their accounts. If you already have a password and a one-time code set up, you’ll only be prompted to use your code to sign in. After logging in, you’ll then be asked to create a passkey. Going forward, signing in will require that passkey.
To add a passkey, open the Microsoft Authenticator app on your mobile device. Choose your account, then select “Set up a passkey.” You’ll first verify your identity, then you’ll be able to create a passkey.
Since Microsoft Authenticator is dropping password support, you’ll want to select a different password manager for websites that use passwords.
Apps like Bitwarden and 1Password are ideal, as they provide free and/or affordable plans and also work with passkeys. A new addition to the passkeys specification provides passkey portability: the ability to transfer passkeys between devices and apps. If you use a manager like Bitwarden or 1Password, you essentially already have access to your passkeys across all your devices without that new feature. But in the future you should be able to export your passkeys from Chrome on Windows, for example, and import them into Safari on a Mac.
Ever since third-party browser cookies began being used to track people across the Internet, and privacy concerns were voiced en masse, web browsers have been adding privacy protection features. Many people take their online privacy for granted, or just don't think about it. But for those of us who value privacy, these tools have been essential.
Some of these web browser features include hiding your network IP address, disallowing third-party cookies, masking the browser information sent to servers, supporting third-party ad and social media widget blocker plugins (e.g. uBlock Origin), protecting against browser fingerprinting, and more.
A side benefit of these tools and features is that they can dramatically speed up web browsing, since they block a fair amount of code that is typically only used for advertising purposes. And blocking that code can make you safer online. All that advertising code has a tendency to make you more vulnerable to nefarious exploits.
Not so fast. Enter the Electronic Frontier Foundation (EFF). According to the nonprofit, we're not even close to giving people proper control over their online privacy and tracking prevention.
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.
To prove their point, and to provide a valuable service to users, they created the Cover Your Tracks website (https://coveryourtracks.eff.org/).
With a button click you can run a test on your web browser to determine how well it blocks trackers, and whether (and how badly) it can be fingerprinted.
Browser fingerprinting is a technique websites use to identify and track users by collecting unique characteristics of their web browser and device (like screen size). This allows websites to identify users even without relying on traditional tracking methods like cookies.
You will be amazed at how much information can be gathered about your web browser and device using incredibly creative tricks. The goal is for the tracking company to gather "bits" of information about you: the more bits, the more unique you become. For example, your display size represents a few bits of uniquely identifiable information. Your display color depth is a few more. The way your browser renders graphics pixels provides bits of information about your graphics hardware. Even the list of fonts available on your computer provides bits of information for your fingerprint. And that's just the beginning.
All of these bits of information combined increase your uniqueness among everyone else they track. Your fingerprint could end up matching only 1 in 100,000 people, or, at best, 1 in 100. Either way you're in a cohort that can easily be tracked and marketed to across the Internet.
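The "bits" here are bits of entropy, and the arithmetic is simple: a characteristic shared by 1 in N browsers contributes log2(N) bits, and bits from independent characteristics add up. Here's a quick C# sketch; the characteristics and numbers below are made up for illustration, while Cover Your Tracks measures the real values:

```csharp
// Hypothetical characteristics: each is shared by 1 in N browsers,
// contributing log2(N) bits of identifying information.
var characteristics = new Dictionary<string, double>
{
    ["Screen size"] = 80,           // 1 in 80 browsers match yours
    ["Color depth"] = 10,
    ["Canvas rendering"] = 2_500,
    ["Installed font list"] = 9_000
};

double totalBits = 0;

foreach (var (name, oneInN) in characteristics)
{
    var bits = Math.Log2(oneInN);
    totalBits += bits;
    Console.WriteLine($"{name}: ~{bits:F1} bits");
}

// 2^totalBits estimates how large a crowd you'd need before another
// browser looks exactly like yours.
Console.WriteLine($"Total: ~{totalBits:F1} bits, or about 1 in {Math.Pow(2, totalBits):N0} browsers");
```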
One interesting thing I discovered was that enabling Advanced Tracking and Fingerprinting Protection in Safari actually made my fingerprint more unique: so few people use that feature that using it is, ironically, a valuable "bit" of information for my browser fingerprint!
So, even if you're not a privacy buff or concerned with tracking, it's really interesting to use the EFF Cover Your Tracks tool to see how these companies track you, and how private your browsing truly is.
Website updates generally take the form of content additions, maintenance releases, and major redesigns. Adding content to a website is great for a variety of reasons, but it's the lowest-hanging fruit. Where website owners most often fall short is in performing regular software maintenance and scheduling major redesigns.
The most important task for any website owner is to have scheduled maintenance performed on their platform. There are three primary reasons why:
Most web platforms use some form of content management system (CMS). This allows easy content updates right from a web browser. The problem is that these systems are complex, and their authentication systems give nefarious people a way to exploit your website: pushing their own agenda by altering your content, or even using your website to perform denial-of-service or other attacks.
Aside from your credentials being compromised, the biggest vector for misuse of your website is a security vulnerability. If you'd like to be sufficiently frightened, visit the global Common Vulnerabilities and Exposures tracking website at https://www.cve.org/ and look up the vulnerabilities for your website platform technology, like WordPress.
I apologize in advance for the nightmares.
Sometimes an update to the underlying platform your website uses can provide performance improvements, like less memory usage for more concurrent visitors, or speed improvements for a better overall user experience (UX). They can also provide new features, like a more seamless login experience, integration with a third party service, or that cookie acceptance banner you've been wanting to add.
If you let your website sit too long, ignoring maintenance releases and security patches, updating it later when you really need to will be a nightmare on a tight timeline. It's much easier (and cheaper!) to make incremental changes along the way, than to try and make lots of them all at once later.
Major redesigns are a secondary concern, but still extremely important. So you should focus on them once you have regular maintenance under control. For the sake of clarity I'll focus on the three primary benefits of major redesigns that fundamentally change the overall appearance of your website.
Sometimes an organization changes their branding, and the change is significant enough that replacing a logo and changing colors on the website aren't enough to meet the new branding requirements. This is the most common and usually the easiest rationale to justify budgeting and planning a redesign.
When your website has been around for a while (which is relative, based on use and audience preferences), it will begin to reflect negatively on your brand image. Competitors will see an easy way to win the confidence war for prospective customers by pointing out how "dated" your website looks compared with theirs, and so on. And you would be hard pressed to argue the point.
Likewise, a dated website will slowly lose visitors to competitors, and give your existing customers a suboptimal and sometimes frustrating experience. For example, a competitor may implement an entirely new search feature that uses an AI large language model (LLM) chat experience that returns much better results, faster, and easier, than your simple text search.
You need to stay in front of potential customers and avoid giving them a reason to leave. Keep them engaged and satisfied with their user experience. To do that, review your website structure and design periodically, get feedback from visitors, and consider a periodic redesign an essential part of your organization's marketing and services.
During the COVID pandemic, Fynydd partnered with Blue Sequoyah Technologies (https://bluesequoyah.com) to build Coursabi (https://coursabi.com), a cutting-edge learning platform. It's an all-in-one learning solution with features like native and SCORM course support, video conferencing, audio and video libraries, forms, reports, events, and more, all presented as a concise learning journey.
We found particular success in the pharmaceutical and healthcare space, and are proud to see these clients renewing each year as they find success in keeping their teams trained, compliant, and most of all, happy.
Alkermes (https://alkermes.com) has been applying deep neuroscience expertise to develop medicines designed to help people living with complex and difficult-to-treat psychiatric and neurological disorders. They're one of our first subscribers and we're proud to say that they have renewed their Coursabi subscription for 2025!
Picture this: your mission-critical software project is in full swing. Timelines are tight and deliverables are complex. Then, out of nowhere, your lead developer needs extended time off, or perhaps moves on to a new opportunity. On small projects, this is a headache—but on mid to large software projects it can be a full-blown crisis. But you were proactive. You had your software development partner cross-train a backup. Crisis averted.
On larger projects, the complexity of the codebase, the number of integrations, and the coordination required across teams make it essential to have more than one person deeply familiar with its inner workings. A backup developer isn’t just a safety net—they’re a critical part of maintaining project velocity and quality when team members are unavailable. With cross-training there’s always someone who can step up and keep the project moving, ensuring that timelines and business goals are met.
Plus, the benefits extend beyond risk management. Backup developers help foster a culture of collaboration and accountability. When multiple developers understand the system, it encourages better documentation, smarter code reviews, and provides a larger base of technical knowledge. Ultimately, for appropriately sized and mission critical software app and platform projects, investing in a backup developer will protect your investment. It’s peace of mind that your project won’t grind to a halt over a single absence.
Adding one or more backup developers doesn’t have to double your costs or slow down the team. Just be smart about it. Cross-training can be done efficiently by including backup developers in meetings, writing thorough documentation, and pairing them with leads during onboarding and major feature development. This approach ensures knowledge transfer without disrupting velocity or exceeding the budget.
Blazor is a powerful framework from Microsoft used for building interactive web UIs with C# instead of JavaScript. A key feature of Blazor is its flexibility in how applications are hosted and run. The choice of hosting model—Server, WebAssembly, Interactive Auto, or Hybrid—depends entirely on the specific needs of the application, such as scale/performance requirements, offline capabilities, and access to native device features.
But which flavor of Blazor should you use? Well, that depends...
The Blazor Server hosting model is the easiest to set up and use. It runs your application on the server, and when a user interacts with the application, UI events are sent to the server over a real-time (SignalR) connection. The server processes these events, calculates the necessary UI changes, and sends only those small changes back to the client to update the display. This results in a very thin client and a fast initial load time, as almost no application code is downloaded to the browser.
Best reasons to use this hosting model: it's the easiest to set up, initial load times are fast, and almost no application code is downloaded to the browser.
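As a rough sketch, here's what enabling server interactivity looks like in a .NET 8+ Blazor Web App's Program.cs (trimmed down; the project template adds static files, error handling, and other middleware, and `App` is the template's root component):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register Razor components and enable server-side interactivity,
// which processes UI events over a SignalR connection.
builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents();

var app = builder.Build();

app.UseAntiforgery();

// Route requests to the root component using the interactive
// server render mode.
app.MapRazorComponents<App>()
    .AddInteractiveServerRenderMode();

app.Run();
```

The WebAssembly model swaps in `AddInteractiveWebAssemblyComponents()` and `AddInteractiveWebAssemblyRenderMode()` in the same two spots.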
In contrast, Blazor WebAssembly runs your entire application directly in the web browser using a WebAssembly-based .NET runtime. The application's C# code, its dependencies, and the .NET runtime itself, are all downloaded to the client. Once downloaded, the application executes entirely on the user's machine, enabling full offline functionality and leveraging the client's processing power for a rich, near-native user experience.
Best reasons to use this hosting model: full offline functionality, leveraging the client's processing power, and a rich, near-native user experience.
The Blazor Interactive Auto render mode allows you to use both Server and WebAssembly components in a single project, giving you precise control over how your app behaves.
Best reasons to use this hosting model: per-component control, with the fast initial load of Server followed by WebAssembly execution once the runtime has downloaded, as shown in the sketch below.
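For example, in a .NET 8+ Blazor Web App (with both Server and WebAssembly component services registered in Program.cs), each component can declare its own render mode. Here's the template's sample counter opting into Auto:

```razor
@* Counter.razor: Auto renders from the server first for a fast initial
   load, then switches to WebAssembly once the runtime has downloaded. *@
@page "/counter"
@rendermode InteractiveAuto

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount;

    private void IncrementCount() => currentCount++;
}
```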
Blazor Hybrid is a bit different: it's not used for building web applications. It allows web developers to use their skills to build apps that run on devices at close to native speed. .NET MAUI is the core platform, which is native and cross-platform, and normally uses XAML for coding user interfaces. When using Blazor Hybrid, however, you can use Blazor web components alongside XAML or in place of it.
This model provides the best of both worlds: the ability to build a rich, cross-platform UI with web technologies while having full access to the native capabilities of the device, such as the file system, sensors, and notifications.
Blazor hybrid is the perfect solution for developers looking to create desktop and mobile applications that can share UI components and logic with an existing Blazor web application, or for new mobile app projects.
Many organizations are eager to adopt microservices, sometimes before they even know if they need them. Knowing when they fit a need makes all the difference, and sometimes, not using them is the smarter move.
There are some cases where a microservice architecture is your best bet:
If your project has to support multiple technologies that don’t naturally work together, microservices are a natural fit. Take my experience with the Whitelist Sync Web project:
Originally, this project ran on a 100% .NET backend with a Vue frontend. Later, I migrated to a Node backend with a React frontend. However, I still needed to support SignalR—Microsoft’s real-time communication technology—because client applications in the field were dependent on it. The challenge? SignalR server-side hosting is only supported in C#. Node cannot host a SignalR hub.
Removing SignalR from the project wasn’t an option (unless I was willing to rewrite and redeploy all the client apps—which was out of scope). The solution was to create a separate SignalR microservice: a C# project dedicated to SignalR, communicating with the Node backend through JWT auth and REST endpoints. A reverse proxy routed /hubs/ requests to the SignalR service, while all other traffic hit the React app. The entire setup was managed using Docker Compose.
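As a simplified sketch of that pattern (not the production code; `SyncHub`, the route, and the shared-secret JWT setup are placeholder assumptions, and the Microsoft.AspNetCore.Authentication.JwtBearer package is required), the C# SignalR service might look like this:

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.SignalR;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

// Validate the same JWTs the Node backend issues, so both services
// agree on who the caller is.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:SharedSecret"]!)),
            ValidateIssuer = false,
            ValidateAudience = false
        };
    });

builder.Services.AddAuthorization();
builder.Services.AddSignalR();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// The reverse proxy sends /hubs/* traffic here; everything else
// goes to the Node/React application.
app.MapHub<SyncHub>("/hubs/sync").RequireAuthorization();

app.Run();

// Placeholder hub; the real one relays whitelist updates to clients.
public class SyncHub : Hub { }
```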
Microservices can be helpful for extending existing applications. If you want to add new functionality using a different tech stack—or isolate new features for a big team—they let you do this without rewriting your monolith.
Splitting your app into smaller, independently hosted pieces means a failure in one service won’t crash the entire application. Of course, you can build robust error handling into a monolith, but microservices can make fault isolation easier.
Cloud providers offer load balancing for monoliths, but microservices can provide more granular scaling. Just keep in mind, if you don’t have heavy load or growth requirements, this might not be worth the extra complexity and cost.
Microservices let you mix and match tech: imagine a Node backend, a React frontend, and a Python microservice for AI features. Each part of your app can use the best tool for the job.
While microservices have their place, they also come with significant downsides:
Running multiple services means more infrastructure, more devops, and more cloud spend. If your application has low demand, this cost is often unjustified. Starting new projects with a single stack keeps things cheaper and simpler.
Multiple services mean more to manage: logging, monitoring, orchestration (hello, Kubernetes), and maintenance. All of this adds to the operational burden.
Using cloud-specific services like Azure Functions ties your app to one provider. Migrating later is possible, but few businesses want to refactor dozens of microservices just to escape rising costs.
Deploying a monolith is straightforward. Microservices require complex CI/CD pipelines and orchestration. Tools like Fynydd fdeploy can help, but they add yet another layer of infrastructure.
With more moving parts, it’s harder to add features, fix bugs, and onboard new team members.
Microservices make authentication harder. Instead of just handling user auth, you now need to manage service-to-service authentication, which can be complicated and error-prone.
Given all these costs, it’s clear: Start simple. For most projects, especially those with low load or a single technology stack, a monolith is the best starting point. Design your application with modularity and future growth in mind, so you can break it into microservices if you ever need to. But don’t jump into microservices unless you’re solving real problems that require them.
Further reading: You Don’t Need Microservices (itnext.io)
Fynydd was founded 15 years ago in June of 2010 as an onshore web and software development company. The tech industry has changed quite a bit over that time (I'm looking at you, AI), but all along we've been helping organizations thrive with web and native software development services that support their marketing, products, and operations.
In the beginning we were challenged with navigating a web dominated by Internet Explorer. Google Chrome had only been available for a couple years at that time (and only had about a 9% market share), and each browser had a different (and in many ways incompatible) rendering engine. Building websites that behaved properly across them all resulted in a lot of wasted time. And that was just desktop web browsers.
In that same year Apple only had a single phone: the iPhone 3GS with iOS 4. And Samsung introduced a new device, the Galaxy S. Android had only been available for a couple years at that point, and Google released the 6th version (Froyo) with Adobe Flash support! Egads.
We got off to a running start! One of our notable clients back then was US Bank, who partnered with us to analyze how they could revamp their internal knowledge management architecture to better handle acquisition data, and to create a better support system for bank branch employees. This led to a two-phase engagement that culminated in two printed books outlining the state of their technology and our recommendations for moving forward.
We built marketing websites, sales tools, and online stores for other early clients like EP Henry (hardscaping), Philadelphia Scientific (battery management), and VWR International (life sciences). Some of our earliest clients remain with us today.
As I wrote in the introduction, the industry has changed quite a bit since 2010 and we love it! There's so much to learn and explore, including generative AI, machine learning (ML), and large language models (LLMs), and how they can provide new opportunities for organizations to grow and adapt.
So if you need web and app development help, we'd love for you to be in the next chapter of our story!
There's usually more to the story so if you have questions or comments about this post let us know!
Do you need a new software development partner for an upcoming project? We would love to work with you! From websites and mobile apps to cloud services and custom software, we can help!