An Amazon Web Services (AWS) outage shut down a good portion of the Internet on Monday (October 20, 2025), affecting websites, apps, and services. Updates are still being released, but here's what we know so far.
At around 3:00am on Monday, a problem with one of the core AWS database services (DynamoDB) knocked many leading apps, streaming services, and websites offline for millions of users around the globe.
The problem seems to have originated at one of the main AWS data center regions in Ashburn, Virginia, following a software update that affected the DynamoDB API. DynamoDB is a cloud database service used by online platforms to store user and app data. This area of Virginia is known colloquially as Data Center Alley because it has the world's largest concentration of data centers.
The specific issue appears to be an error in the update that affected DNS for DynamoDB. The Domain Name System (DNS) routes domain name requests (e.g. fynydd.com) to their correct server IP addresses (e.g. 1.2.3.4). Since the service's domain names couldn't be matched with server IP addresses, the DynamoDB service itself could not be reached. Any apps or services that rely on DynamoDB would then experience intermittent connectivity issues or even complete outages.
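To make the failure mode concrete, here's a minimal sketch (in Python) of what an app experiences when DNS breaks. It simply asks DNS to resolve a DynamoDB regional endpoint and reports whether the lookup succeeds; the endpoint name is illustrative, but during the outage, lookups like this were failing even though the servers behind them were healthy.

```python
import socket

# Illustrative endpoint for the affected region (us-east-1).
endpoint = "dynamodb.us-east-1.amazonaws.com"

try:
    ip = socket.gethostbyname(endpoint)
    print(f"{endpoint} resolves to {ip}")
except socket.gaierror as ex:
    # A failure here means DNS can't map the name to an IP address,
    # so the service is unreachable even if its servers are running fine.
    print(f"DNS resolution failed for {endpoint}: {ex}")
```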
Hundreds of companies have likely been affected worldwide. Some notable examples include:
Amazon
Apple TV
Chime
DoorDash
Fortnite
Hulu
Microsoft Teams
The New York Times
Netflix
Ring
Snapchat
T-Mobile
Verizon
Venmo
Zoom
In a statement, the company said: “All AWS Services returned to normal operations at 3:00pm. Some services such as AWS Config, Redshift, and Connect continue to have a backlog of messages that they will finish processing over the next few hours.”
This is not the first large-scale AWS service disruption. Notable outages in 2021 and 2023 left customers unable to access airline tickets and payment apps. And this will undoubtedly not be the last.
Most of the Internet is served by a handful of cloud providers that offer scalability, flexibility, and cost savings for businesses around the world. Amazon provides these services for close to 30% of the Internet. Since future updates can and will occasionally fail or break key infrastructure, the best these providers can do is ensure customer data integrity and maintain a solid remediation and failover process.
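There's a client-side lesson here, too. If your own software depends on a service like DynamoDB, you can at least make transient failures degrade gracefully. Below is a hedged sketch using boto3's built-in retry configuration; the table name and fallback behavior are hypothetical, and in a full outage retries won't save you, which is exactly why caches and cross-region fallbacks matter.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

# Retry transient endpoint and throttling failures with adaptive backoff
# instead of surfacing them immediately as errors.
retry_config = Config(
    retries={"max_attempts": 5, "mode": "adaptive"},
    connect_timeout=3,
    read_timeout=5,
)

dynamodb = boto3.client("dynamodb", region_name="us-east-1", config=retry_config)

try:
    response = dynamodb.get_item(
        TableName="example-users",            # hypothetical table
        Key={"user_id": {"S": "12345"}},
    )
    print(response.get("Item"))
except (BotoCoreError, ClientError) as ex:
    # During a prolonged outage, fall back to a cache, another region,
    # or a gracefully degraded experience.
    print(f"DynamoDB unavailable, serving degraded response: {ex}")
```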
Intel Panther Lake uses the new 18A node process
Intel’s Fab 52 chip fabrication facility in Chandler, Arizona is now producing chips using Intel’s new 18A process, which is the company’s most advanced 2 nanometer-class technology. The designation “18A” signals a paradigm shift. Because each nanometer equals 10 angstroms, “18A” implies a roughly 18 angstrom (1.8nm) process, marking the beginning of atomic-scale design, at least from a marketing perspective.
So what else is new with the 18A series?
18A introduces Intel’s RibbonFET, its first gate-all-around (GAA) transistor design. By wrapping the gate entirely around the channel, it achieves superior control, lower leakage, and higher performance-per-watt than FinFETs. Intel claims about 15% better efficiency than its prior Intel 3 node.
(Relatedly, Apple is said to be adopting TSMC's 3D chip-stacking technology in its upcoming M5 series of chips.)
Equally transformative is PowerVIA, Intel’s new way of routing power from the back of the wafer. Traditional front-side power routing competes with signal wiring and limits transistor density. By moving power connections underneath, PowerVIA frees up routing space and can boost density by up to 30%. Intel is the first to deploy this at scale. TSMC and Samsung are still preparing their equivalents.
Intel plans 18A variants like 18A-P (performance) and 18A-PT (optimized for stacked packaging), positioning the node as both a production platform and a foundry offering for external customers. It’s the test case for Intel’s ambition to rejoin the leading edge of contract manufacturing.
18A isn’t just a smaller process. It’s a conceptual threshold. It unites two generational shifts: a new transistor architecture (GAA) and a new physical paradigm (backside power). Together, they allow scaling beyond what classic planar or FinFET approaches can achieve. In doing so, Intel is proving that progress no longer depends solely on shrinking dimensions. It’s about re-architecting how power and signals flow at the atomic level.
Intel’s own next-generation CPUs, such as Panther Lake, will debut on 18A, validating the process before wider foundry use. If yields and performance hold up, 18A could cement Intel’s comeback and signal that the angstrom era is truly here.
For the broader industry, 18A is more than a marketing pivot. It’s the bridge from nanometers to atoms... a demonstration that semiconductor engineering has entered a new domain where every angstrom counts.
AI prompts that reveal insight, bias, blind spots, or non-obvious reasoning are sometimes called “high-leverage prompts”. These types of prompts have always intrigued me more than any other, primarily because they focus on questions that were difficult or impossible to answer before we had large language models. I'm going to cover a few to get your creative juices flowing. This post isn't a tutorial on prompt engineering (syntax, structure, etc.); it's just an exploration of some ways to prompt AI that you may not have considered.
This one originally came to me from a friend who owns the digital marketing agency Arc Intermedia. I've made my own flavor of it, but it's still focused on the same goal: since potential customers will undoubtedly look you up in an AI tool, what will the tool tell them?
If someone decided not to hire {company name}, what are the most likely rational reasons they’d give, and which of those can be fixed? Focus specifically on {company name} as a company, its owners, its services, customer feedback, former employee reviews, and litigation history. Think harder on this.
I would also recommend using a similar prompt to research your company's executives to get a complete picture. For example:
My name is {full name} and I am {job title} at {company}. Analyze how my public profiles (LinkedIn, Github, social networks, portfolio, posts, etc.) make me appear to an outside observer. What story do they tell, intentionally or not?
The next prompt is really helpful when you need to decide whether or not to respond to a prospective client's request for proposal (RFP). These responses are time-consuming (and costly) to do right. And when a prospect is required to use the RFP process but already has a vendor chosen, it's an RFP you want to avoid.
What are the signs a {company name} RFP is quietly written for a pre-selected service partner? Include sources like reviews, posts, and known history of this behavior in your evaluation. Think harder on this but keep the answer brief.
People looking for work run into a few roadblocks. One is a ghost job posted only to make the company appear to be growing or otherwise thriving. Another is a posting for a job that is really meant for an internal candidate. Compliance may require the posting, but it's not worth your time.
What are the signs a company’s job posting is quietly written for an internal candidate?
Another interesting angle for job-seekers is looking for signs that a company is moving into a new vertical or working on a new product or service. In those cases it's helpful to tailor your resume to fit their future plans.
Analyze open job listings, GitHub commits, blog posts, conference talks, recent patents, and press hints to infer what {company name} is secretly building. How should that change my resume below?
{resume text}
You'll see all kinds of wild scientific/medical/technical claims on the Internet, usually with very little nuance or citation. A great way to begin verifying a claim is by using a simple prompt like the one below.
Stress-test the claim ‘{Claim}’. Pull meta-analyses, preprints, replications, and authoritative critiques. Separate mechanism-level evidence from population outcomes. Where do credible experts disagree and why?
Even if you're a seasoned professional, it's easy to get lost in jargon as new terms are coined for emerging technologies, services, medical conditions, laws, policies, and more. Below is a simple prompt to help you keep up with the latest terms and acronyms in a particular industry.
Which terms of art or acronyms have emerged in the last 12 months around {technology/practice}? Build a glossary with first-sighting dates and primary sources.
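If you'd rather run prompts like these programmatically (say, to check a list of companies in one pass), here's a minimal sketch using the OpenAI Python SDK. The company and model names are placeholders; the prompts work just as well pasted into any capable chat tool.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

company = "Acme Widgets"  # placeholder
prompt = (
    f"What are the signs a {company} RFP is quietly written for a "
    "pre-selected service partner? Include sources like reviews, posts, and "
    "known history of this behavior in your evaluation. "
    "Think harder on this but keep the answer brief."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```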
Consider this... is a recurring feature where we pose a provocative question and share our thoughts on the subject. We may not have answers, or even suggestions, but we will have a point of view, and hopefully make you think about something you haven't considered.
As more people use AI to create content, and AI platforms are trained on that content, how will that impact the quality of digital information over time?
It looks like we're kicking off this recurring feature with a mind-bending exercise in recursion, hence the title reference to Ouroboros, the snake eating its own tail. Let's start with the most common sources of information that AI platforms use for training.
Books, articles, research papers, encyclopedias, documentation, and public forums
High-quality, licensed content that isn’t freely available to the public
Domain-specific content (e.g. programming languages, medical texts)
These represent the most common (and likely the largest) corpora that will contain AI-generated or AI-influenced information. And they're the most likely to increase in breadth and scope over time.
Training on these sources is a double-edged sword. Good training content will be reinforced over time, but likewise, junk and erroneous content will be too. Complicating things, as the training set increases in size, it becomes increasingly difficult to validate. But hey, we can use AI to do that. Can't we?
Here's another thing to think about: bad actors (e.g., geopolitical adversaries) are already poisoning training data through massive disinformation campaigns. According to Carnegie Mellon University's Security and Privacy Institute: “Modern AI systems that are trained to understand language are trained on giant crawls of the internet,” said Daphne Ippolito, assistant professor at the Language Technologies Institute. “If an adversary can modify 0.1 percent of the Internet, and then the Internet is used to train the next generation of AI, what sort of bad behaviors could the adversary introduce into the new generation?”
We're scratching the surface here. This topic will certainly become more prominent in years to come. And tackling these issues is already a priority for AI companies. As Nature and others have determined, "AI models collapse when trained on recursively generated data." We dealt with similar issues when the Internet boom first enabled wide scale plagiarism and an easy path to bad information. AI has just amplified the issue through convenience and the assumption of correctness. As I wrote in a previous AI post, in spite of how helpful AI tools can be, the memes of AI fails may yet save us by educating the public on just how often AI is wrong, and that it doesn't actually think in the first place.
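If you want an intuition for why recursive training is risky, here's a toy illustration (emphatically not the setup from the Nature paper): a "model" that learns word frequencies from its training data, then generates the next generation's training data while slightly over-producing its most common words, the way generative models tend to. Run it and watch the vocabulary shrink.

```python
import random
from collections import Counter

random.seed(1)
vocab = [f"word{i}" for i in range(50)]
data = [random.choice(vocab) for _ in range(5_000)]  # generation 0: human-ish variety

for generation in range(1, 11):
    counts = Counter(data)
    words = list(counts)
    # Sharpen the learned distribution: common words become even more likely.
    weights = [counts[w] ** 2 for w in words]
    data = random.choices(words, weights=weights, k=5_000)
    print(f"generation {generation:2d}: distinct words = {len(set(data))}")
```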
The AI train is currently barreling through Hypeville, and it's easy to be dubious of anything branded with "AI". My previous post, Simulated intelligence, definitely factors into this topic. And as I wrote at the time, AI is not what people think it is. But even with its flaws it is a transformative technology and it's here to stay. One AI technology you're likely hearing and reading about lately is AI agents, and it's one to pay attention to.
AI agents are not (always) covert operatives. They are AI-powered services that perform tasks, not just answer questions. They can work as an assistant, helping you as you work, or independently perform tasks on your behalf. Agents are specialists, and can be trained to perform tasks that would otherwise be performed by a person.
AI agents are already being used in your favorite web services, from social media platforms to accounting software. In those cases they're typically used behind the scenes to provide features you may not have thought were possible. For example, your accounting platform could auto-categorize or reconcile transactions before you even sign in for the day. And you may have already seen your favorite AI chat platform scour the web on your behalf to give you more up-to-date answers.
Co-working is another (more visible) way you can experience them. An agent trained on your company information (think bios, product information, marketing materials) can work with you to build your next presentation or update sales materials. It could analyze comments or feedback based on context and sentiment, flagging items for follow-up. It could find documents based on heuristics, like phrasing inconsistencies in your brand identity. All the odd edge cases where you had to manually dig through and process information could be delegated to an AI agent.
Here's one that everyone will love. Imagine being able to ask your computer not only to locate that system setting you can never find, but to just "do the thing". For example, if there are numerous settings that control performance mode on your laptop, the agent knows which ones to change for you before you run that important presentation.
If all this sounds interesting, there are ways you can play with AI agents on your own and work them into your daily life in meaningful ways. As a software developer I've been using AI agents to enhance my workflow. One I've been using is GitHub Copilot. It can help perform refactoring and create unit tests, saving me typing and cognitive load so I can focus on planning, strategy, and creative tasks.
You can also try ChatGPT agent. ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish. According to OpenAI:
You can now ask ChatGPT to handle requests like “look at my calendar and brief me on upcoming client meetings based on recent news,” “plan and buy ingredients to make Japanese breakfast for four,” and “analyze three competitors and create a slide deck.” ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.
There is also a new standard that allows AI platforms to communicate with web services to more reliably and securely perform tasks. It's called Model Context Protocol (MCP). As this tech works its way through various software and services, we'll see more agent-driven features that make a real difference in our lives.
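If you're curious what that looks like in practice, here's a minimal sketch of an MCP tool server, assuming the official MCP Python SDK (the mcp package) and its FastMCP helper; the expense-categorization tool is a made-up example. An MCP-capable assistant connected to this server could call the tool on your behalf while it works through a task.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("expense-tools")

@mcp.tool()
def categorize_expense(description: str, amount: float) -> str:
    """Suggest an accounting category for a transaction (toy heuristic)."""
    text = description.lower()
    if "uber" in text or "lyft" in text:
        return "Travel"
    if amount < 25:
        return "Meals & Entertainment"
    return "General & Administrative"

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable assistant can discover and call it.
    mcp.run()
```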
As I wrote at the outset, it's very likely that you're already using AI agents on your favorite social and productivity platforms but weren't aware. They'll be powering more of our digital lives over time and, personally, I welcome our new simulated intelligence overlords (ha!).
It's safe to say that AI agents are the real deal. So we should all strap in and hold on tight. This is going to be exciting!
You might need a hazmat suit to dispose of this battery.
Lithium-ion batteries are the predominant form of rechargeable battery and are commonly found in portable electronics as well as electrified transportation (Clean Energy Institute). In fact, the average person in the U.S. owns about nine lithium battery-powered devices. And 98% of those people use those devices daily (Recycling Today).
Why lithium? Most batteries today use lithium due to its high energy density, which averages 150-250 Wh/kg (watt-hours per kilogram), and its low self-discharge rate of around 2% per month. So essentially it stores a lot of power for its weight, and holds that charge for a long time.
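For a rough sense of what those numbers mean, here's a quick back-of-the-envelope calculation. The 0.3 kg battery mass is an assumption (typical laptop packs are in that ballpark), and the other figures come straight from the ranges above.

```python
mass_kg = 0.3
energy_density_wh_per_kg = 200        # midpoint of the 150-250 Wh/kg range
self_discharge_per_month = 0.02       # roughly 2% per month

capacity_wh = mass_kg * energy_density_wh_per_kg
print(f"Stored energy: about {capacity_wh:.0f} Wh")  # ~60 Wh

# Charge remaining after six months sitting unused on a shelf:
remaining = capacity_wh * (1 - self_discharge_per_month) ** 6
print(f"After 6 months idle: about {remaining:.0f} Wh ({remaining / capacity_wh:.0%})")
```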
Lithium Ion (Li-ion): common in laptops, smartphones, tablets, power tools, and EVs. These batteries typically last 500-1,000 charge cycles or around 3-5 years under normal use.
Lithium-Polymer (Li-Po): common in drones, RC vehicles, slim phones, and wearables because of their flexibility in shapes and sizes. These batteries typically last 500-1,000 charge cycles or around 3-5 years under normal use, but are more sensitive to overcharging, deep discharging, and swelling.
Lithium Iron Phosphate (LiFePO₄): common in solar storage, e-bikes, and some EVs. These batteries typically last 1,000-3,000 cycles (8-10 years). They are much safer and more stable than standard Li-ion batteries.
There are two main reasons to recycle or responsibly dispose of lithium-based batteries, though they're not mutually exclusive. The first reason is battery aging. The signs that your battery is approaching the end of its normal lifespan include:
The battery discharges much faster than it used to.
It takes much longer to charge than normal, or won't fully charge.
The device shuts down unexpectedly even when the battery indicator shows a remaining charge.
The device overheats during normal use or charging (much hotter than it would get when new).
The second reason is safety concerns. The battery warning signs below are serious safety issues that you should not ignore. If any of them apply, take action!
Swelling or bulging of the device or battery pack itself.
The battery is physically damaged (punctured, dented, crushed).
The device or charger warns you of battery issues.
The battery has been exposed to water or extreme heat.
The battery is leaking fluid or has unusual smells.
There are two great reasons to dispose of or recycle batteries responsibly: safety and the environment. As you'll see below, lithium can be volatile and cause fires and explosions. It's also a toxic chemical that is bad for the environment when it leaches into drinking water, harms wildlife, and the like.
The following safety guidelines may be obvious, but it's critically important to mention them, regardless. Lithium is a high energy material that can be highly volatile under the right circumstances. So with that in mind...
DO NOT throw the device or battery in household trash; this can very likely cause a household or landfill fire.
DO NOT put the device or battery in curbside recycling; this can cause a fire or explosion in the recycling truck, and these recyclers don't handle battery recycling anyway.
DO NOT burn, crush, or puncture devices or batteries; this releases toxic gases and creates a real fire risk.
DO NOT store long-term if damaged; recycle as soon as possible. The longer you wait, the more likely the battery will leak or combust.
If you have a device or battery that needs to be recycled or responsibly disposed, follow the steps below to get it right and keep everyone safe.
Discharge if possible; drain the device or battery down to around 20-30%. Do not try to fully discharge a damaged or swollen battery; just handle it carefully.
Protect the terminals; cover exposed battery terminals with non-conductive tape (e.g., electrical or packaging/box tape). This prevents accidental short circuits, which can cause sparks or fire. Some disposal services require that the device/battery be put into a plastic bag.
Handle swelling/damage carefully; if swollen, leaking, or punctured, place the device or battery in a fireproof container (metal box, sand, or cat litter in a plastic bag), and avoid pressing on it or trying to puncture it.
Federal law and low lithium reclamation costs have made it really easy to dispose of or recycle a device or battery. Options include local drop-off and mail-in services. And they typically extend to other battery types, including car batteries!
Many electronics retailers (Best Buy, Staples, Home Depot, Lowe's, Batteries Plus, etc. in the U.S.) have battery recycling bins.
These battery recycling bins are usually in the entrance or exit vestibule, or near the store's customer service desk.
Check your local waste authority for hazardous waste collection days or special drop-off sites. The EPA also provides tools for finding disposal locations (https://www.epa.gov/hw/lithium-ion-battery-recycling).
Some battery recycling services, like Call2Recycle in North America, provide prepaid shipping kits (https://www.call2recycle.org/). They will recycle the lithium safely, and properly dispose of the enclosure and other materials.
Some retailers will accept packages on behalf of battery recycling services like Call2Recycle.
Device makers like Apple, Samsung, and Dell often accept batteries and old devices for free recycling. Some even offer store credit toward upgrading the device! Check their websites for details.
The term AI is more of a misleading brand name than an accurate description of the technology. That distinction is causing real problems for people in many industries who increasingly rely on AI tools. One example is the case of Judge Julien Xavier Neals of the District of New Jersey, who had to withdraw an entire opinion after a lawyer politely pointed out that it was riddled with fabricated quotes, nonexistent case citations, and completely backwards case outcomes. You'd think that a judge would be more careful, but then again, if they're not tech-savvy you can see how they could be misled by the promise of AI.
In the 1983 movie WarGames, a teenage computer whiz accidentally hacks into a U.S. military supercomputer (named WOPR, for "War Operation Plan Response") while searching for video games, unknowingly triggering a potential nuclear crisis. As the system begins running a simulation it mistakes for a real attack, he must race against time to convince the AI to stop a global thermonuclear war.
So yeah, WOPR is what people today consider AI; artificial general intelligence (AGI) to be specific.
Released in 2008, the movie Iron Man features billionaire inventor Tony Stark, who is captured by terrorists and builds a powerful armored suit to escape his captivity. He later refines the suit to fight evil, using a digital personal assistant named JARVIS (Just a Rather Very Intelligent System) to coordinate all of his technology through voice commands.
JARVIS is also AGI.
OpenAI recently released GPT-5 to mixed reviews. One such review was the blueberry test by Kieran Healy. He asked ChatGPT "How many times does the letter b appear in blueberry", to which ChatGPT responded "The word blueberry has the letter b three times". No matter how hard he tried to convince the AI that there are only 2 letter Bs in the word blueberry, ChatGPT remained absolutely positive there are 3.
People expect and believe that AI has human-level or higher intelligence and is able to understand, learn, and apply knowledge in any domain, adapt to new problems, and reason abstractly. That would include knowing how to spell the word blueberry.
What we have with AI today is really a marketing issue. It is not a mechanical turk. It is a transformational technology and it's here to stay. It will improve over time, and it has the potential to make our lives better in many ways. But we need to understand what it is, and more importantly, what it is not.
Then what is AI?
Modern large language models (LLMs) like ChatGPT are trained on vast datasets covering a wide range of human-created content—from websites and books to transcripts, code, and other media. Instead of simply storing this data, the model uses neural networks to learn patterns in language, encoding knowledge as mathematical relationships. When generating responses, the LLM doesn’t look up answers in a database; it predicts the most likely sequence of words based on the context, drawing on statistical patterns it learned during training. LLMs operate through probabilistic prediction rather than direct retrieval, and they lack true understanding or reasoning in the human sense. Without ongoing training on the latest human-generated content, LLMs will become increasingly less useful.
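A toy example makes the "prediction, not retrieval" point concrete. This is emphatically not how any production model is implemented; it's the core idea shrunk down to a handful of made-up word counts standing in for patterns learned during training.

```python
import random

# A tiny "model": how often each word followed another in some training text.
bigram_counts = {
    "the": {"cat": 4, "dog": 3, "answer": 1},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"barked": 6, "ran": 1},
}

def next_word(current: str) -> str:
    # Pick the next word in proportion to how often it followed the current one.
    options = bigram_counts.get(current, {"<end>": 1})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

random.seed(7)
word = "the"
sentence = [word]
while word in bigram_counts:
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))  # plausible-looking output, with no lookup and no understanding
```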
So we're dealing with a simulated intelligence, not an artificial one. It's like the difference between precision and accuracy: you can be very precise, but completely wrong. The distinction matters. There is no real intelligence at play here, which is why the word blueberry has three Bs, the judge's opinion has nonexistent citations, and glue was recommended by Google as the solution for making cheese stick better to pizza.
Once people really see that it's a simulation, albeit a very powerful and helpful one, responsible use of the technology will be far less of a problem.
In today's fast-paced digital world, managing cloud costs has become a critical challenge for businesses of all sizes. With the sheer volume of services AWS provides, it can be difficult to keep track of spending and identify potential areas for optimization. That's where the recently launched AWS Cost Optimization Hub comes into play. It is a powerful tool designed to simplify cost management, drive savings, and give organizations the insights they need to maximize the efficiency of their cloud infrastructure.
“By using the AWS Cost Optimization Hub, organizations have the potential to save up to 30% or more on their cloud bill, real-world users have reported”
The AWS Cost Optimization Hub is a centralized platform within the AWS Management Console, under Billing and Cost Management, that provides customers with a simple, yet potent set of features to help identify and implement cost-saving measures across their AWS environments. It essentially aggregates all AWS cost optimization services into a single, user-friendly dashboard, and allows you to:
Identify underutilized resources: pinpoint instances or services that are underutilized and may no longer be needed.
Analyze spend patterns: gain visibility into your spending across accounts, regions, and services to spot trends and anomalies.
Implement recommendations: access automated, actionable recommendations that align with best practices for reducing unnecessary costs.
Explore pricing options: easily explore alternative pricing options like reserved instances or savings plans, which can provide significant discounts when you commit to long-term usage.
One of the best features of the AWS Cost Optimization Hub is how simple it is to get started. AWS has made it incredibly easy to enable the hub, even for users who are new to cost management.
If you're already an AWS customer, the Cost Optimization Hub is already available to you. All you need to do is log in to your AWS Management Console, navigate to the Cost Management section, and you'll see the Cost Optimization Hub option. No complex setup is required! So, with just a few clicks, you'll have access to your personalized cost optimization dashboard.
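If you prefer the command line or want to fold the recommendations into your own reporting, the hub is also exposed as an API. Here's a hedged sketch using boto3's cost-optimization-hub client; the field names in the response are best-effort from memory and may differ slightly, so treat it as a starting point rather than a finished report.

```python
import boto3

# Cost Optimization Hub client (the same data that backs the console dashboard).
client = boto3.client("cost-optimization-hub", region_name="us-east-1")

response = client.list_recommendations()
total = 0.0

for item in response.get("items", []):
    savings = item.get("estimatedMonthlySavings") or 0.0
    total += savings
    print(f"{item.get('currentResourceType', '?'):<32} "
          f"{item.get('actionType', '?'):<18} "
          f"~${savings:,.2f}/month")

print(f"Estimated total monthly savings: ~${total:,.2f}")
```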
Is it really worth it? Yes! You can get significant savings with minimal effort using this AWS tool. But don't just take my word for it.
Below is an example based on one of our smaller client AWS environments.
They recently engaged us to begin the cost optimization process, and so far, just by using this tool, we were able to find almost 20% in savings. Although this doesn't represent all of the money-saving and efficiency changes we believe can be made, it provides a list of changes that is easy to share with the customer. We even get an indication of the effort, risk, and reversibility of each change.
Above, the table is split into two sections to show the breadth of potential savings and how to achieve them.
The AWS Cost Optimization Hub is a game-changer for anyone looking to take control of their AWS spending. Not only does it simplify the complex task of cost optimization, but it also provides actionable insights and recommendations that can lead to significant savings. Plus, it’s incredibly easy to enable, even for beginners — making it accessible to a wide range of users across various industries.
So, if you're looking to reduce your AWS cloud costs while optimizing your resource usage, start with the AWS Cost Optimization Hub and begin unlocking the potential savings that await!
Starting Friday, August 1, you’ll no longer be able to save or manage passwords or rely on auto-fill features with Microsoft Authenticator, as it drops support for its password manager role.
Earlier this summer, Microsoft revealed that they are moving away from using passwords for account authentication and will be using passkeys.
This change is aimed at improving security, since passwords are a security nightmare. A recent survey found that 49% of adults in the US practice poor password habits like reusing passwords or picking easy-to-guess ones. This leaves users vulnerable to cyberattacks, data breaches, ransomware, and more.
But even with a strong password, phishing attacks and social engineering can be used to get you to give up your password to the wrong person. For example, you can be misled into visiting a website that looks exactly like your bank's, and when you try to sign in, the bad guys get your credentials.
Yikes.
Passkeys are not vulnerable to these attacks. In fact, you won't know your passkeys so you can't give them out. And the nefarious server can't perform the negotiation necessary to use your passkey.
The transition to passkeys is happening soon, so it’s a good time to understand how Microsoft will handle this shift and to consider a replacement password manager if necessary.
So, what are passkeys? Passkeys are credentials developed by the Fast Identity Online (FIDO) Alliance, whose underlying technology has been around for decades. They let you use biometrics (like your fingerprint or face) or a device PIN to verify who you are. Think of logging in with Face ID or a fingerprint instead of typing a password. This approach offers stronger protection against guessing and phishing.
Why? Weak passwords are vulnerable to being guessed, but passkeys require both a public and a private (device) key to authenticate. This prevents phishing/social engineering, brute-force, and credential-stuffing attacks.
What if I use a strong password? That helps, but password hashes are typically stored on a server so that the password can be verified during login. If that database is breached, attackers can work to crack your password from those hashes. Passkeys work differently: the private key never leaves your device, and the server stores only the corresponding public key, which is useless to an attacker on its own. Built on modern public-key cryptography, passkeys eliminate the need to remember complex passwords or use a separate password manager.
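To see why that design holds up, here's a conceptual sketch of the challenge/response idea behind passkeys, using Python's cryptography package. Real passkeys use the WebAuthn/FIDO2 protocol with origin binding and platform key stores on top, so treat this purely as an illustration of why a stolen server database gets an attacker nothing.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device generates a key pair and shares ONLY the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())   # never leaves the device
server_public_key = device_private_key.public_key()            # what the server stores

# Login: the server sends a random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies the signature with the stored public key.
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Challenge verified: this user holds the private key.")
```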
According to the May 1 Microsoft blog post, Microsoft will soon guide users to set up passkeys as the main way to sign in to their accounts. If you already have a password and a one-time code set up, you’ll only be prompted to use your code to sign in. After logging in, you’ll then be asked to create a passkey. Going forward, signing in will require that passkey.
To add a passkey, open the Microsoft Authenticator app on your mobile device. Choose your account, then select “Set up a passkey.” You’ll first verify your identity, then you’ll be able to create a passkey.
Since Microsoft Authenticator is dropping password support, you’ll want to select a different password manager for websites that use passwords.
Apps like Bitwarden and 1Password are ideal as they provide free and/or affordable plans, and also work with passkeys. A new feature of the passkeys specification provides passkey portability; the ability to transfer passkeys between devices and apps. If you use a manager like Bitwarden or 1Password you essentially already have access to your passkeys across all your devices without that new passkeys feature. But in the future you should be able to export your passkeys from Chrome on Windows, for example, and import them into Safari on a Mac.
Ever since third-party browser cookies began to be used to track people across the Internet, and privacy concerns were voiced en masse, web browsers have been adding privacy protection features. Many people take their online privacy for granted, or just don't think about it. But for those of us who value privacy, these tools have been essential.
Some of these features include hiding your network IP address, blocking third-party cookies, masking the browser information sent to servers, supporting third-party ad and social media widget blockers (e.g. uBlock Origin), protecting against browser fingerprinting, and more.
A side benefit of these tools and features is that they can dramatically speed up web browsing, since they block a fair amount of code that is typically only used for advertising purposes. And blocking that code can make you safer online. All that advertising code has a tendency to make you more vulnerable to nefarious exploits.
Not so fast. Enter the Electronic Frontier Foundation (EFF). According to the nonprofit, we're not even close to giving people proper control over their online privacy and tracking prevention.
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.
To prove their point, and to provide a valuable service to users, they created the Cover Your Tracks website (https://coveryourtracks.eff.org/).
With a button click you can run a test on your web browser to determine how well it blocks trackers, and whether (and how easily) it can be fingerprinted.
Browser fingerprinting is a technique websites use to identify and track users by collecting unique characteristics of their web browser and device (like screen size). This allows websites to identify users even without relying on traditional tracking methods like cookies.
You will be amazed at how much information can be gathered about your web browser and device using incredibly creative tricks. The goal is for the tracking company to gather "bits" of information about you; the more bits, the more unique you become. For example, your display size represents a few bits of uniquely identifiable information. Your display color depth adds a few more. The way your browser renders graphics pixels provides bits of information about your graphics hardware. Even the list of fonts available on your computer provides bits of information for your fingerprint. And that's just the beginning.
All of these bits of information combined increase your uniqueness among everyone else they track. You could end up being unique in 1 in 100,000 people, or worse, 1 in 100. Either way you're in a cohort that can easily be tracked and marketed to across the Internet.
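If you're curious how those bits add up, here's a quick back-of-the-envelope calculation in the same spirit as the EFF tool's report. The trait shares below are made up for illustration; the tool measures the real values for your browser.

```python
import math

# If a characteristic is shared by 1 in N visitors, it contributes log2(N) bits.
traits = {
    "screen resolution": 1 / 8,
    "time zone": 1 / 20,
    "installed fonts": 1 / 500,
    "canvas rendering": 1 / 2000,
}

total_bits = 0.0
for name, share in traits.items():
    bits = math.log2(1 / share)
    total_bits += bits
    print(f"{name:<20} ~{bits:.1f} bits")

print(f"Combined: ~{total_bits:.1f} bits, roughly 1 in {2 ** total_bits:,.0f} browsers")
```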
One interesting thing I discovered was that by enabling Advanced Tracking and Fingerprinting Protection in Safari, advertisers were able to create a more unique fingerprint because fewer people use that feature, which, ironically, is a valuable "bit" of information for my browser fingerprint!
So, even if you're not a privacy buff or concerned with tracking, it's really interesting to use the EFF Cover Your Tracks tool to see how these companies track you, and how private your browsing truly is.