Yes, you need to duplicate your frontend business logic on the server
Yes, even if you use WASM or PyScript
Summary
I witness more and more code that historically lived on the backend moving to the frontend.
That's fine, as long as you still duplicate it on the server. Otherwise, corrupted data will eventually find its way into your side of the fence.
What is old is new again
It's an article I didn't think I would have to write in 2024, because the best practice of never trusting the client was already gospel in the '90s.
Of course, I didn't figure that one out at the time; someone explained it to me. It's only fair that we pass the message on to the new generation.
Today the web browser is so powerful and the frameworks that make use of it so rich that there is a whole generation of developers that mostly spend their time doing awesome things in there.
Your SPA will do routing there, you may want to check for permissions there as well to speed things up, and it's fair to cache whether you are authenticated or not for performance reasons.
The problem is that I see more and more code that only does these things on the client side. And that's the road to pain.
Because you can't control the client. Your code runs on somebody else's machine.
Your code will fail to run more often than you think
Given we spend so much time on mobile, where browsers are very limited, it's easy to forget that JS is a totally optional feature of the web.
Yet, your backend code will be exercised without any frontend code being executed more often than you think.
First, you have non-standard browser interactions: extensions performing requests and bypassing your frontend, addons disabling some JS features, users disabling JS altogether, and screen readers that just don't have JS.
You may not target those, but they will visit your site, and they will send requests to your backend.
Then you have the clients that are not web browsers:
Email clients following links.
Chats and social networks providing previews of sites in messages.
Bookmarking services that cache the page.
Links in software like Excel or Google Sheets.
Web crawlers of all kinds.
Nerds that are trying to automate things with Python. I am nerds.
Anti-virus software and corporate proxies that mess with requests.
Corrupted client-side code (e.g., because of partial loading) executing nonetheless.
They will work around your business logic on the client one day, and if you don't replicate it on the server, they will mess up your data into states you didn't expect and from which you cannot recover.
You don't know what you don't know
If you don't restrict the input your server accepts to a manageable subset, what you are technically welcoming is infinite complexity.
The only limits to the number of variations of the incoming data are the laws of physics. You have no idea what can be produced, nor the effect it will have on your system.
There are some Excel files that can crash hardware equipment if sent by email.
While the universe's entropy cannot be fully captured by our imagination, we can restrict the problem to a smaller one by defining what we do want to accept.
This is why we escape SQL to avoid injections, put an upper bound on data size, declare allowed intervals for pagination, etc.
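As a rough sketch of what that looks like server-side (Python with sqlite3; the articles table, MAX_PAGE_SIZE, and the function name are made up for illustration):

```python
# Minimal sketch of restricting input to a manageable subset.
# Parameterized queries handle escaping; explicit bounds cap the rest.
import sqlite3

MAX_PAGE_SIZE = 100  # upper bound on data size per request

def list_articles(db: sqlite3.Connection, page: int, page_size: int):
    # Clamp pagination to the interval we declared acceptable.
    page = max(0, int(page))
    page_size = min(max(1, int(page_size)), MAX_PAGE_SIZE)
    # Placeholders let the driver escape values, avoiding SQL injection.
    return db.execute(
        "SELECT id, title FROM articles LIMIT ? OFFSET ?",
        (page_size, page * page_size),
    ).fetchall()
```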
And those checks will be completely ignored, from time to time, if they only exist on the frontend.
Not to mention attackers will have a blast, since they don't have to play by any of the rules. And bots just don't care: they are oblivious to your beautiful React hooks. They come, they attempt stuff, and you'd better not accept that data into your DB. Check your logs: they likely contain a GET request for wp-admin.php, even if you use Node.
In short, you can't assume anything about the client, the nature or shape of the data, or the intention behind the interaction.
This is why games like League of Legends have a custom client with anti-cheat software that is close to being a rootkit: they want full control of the machine. And despite that, they still duplicate checks on the servers. And yet, they still have to fight cheaters every day, and they don't always win.
You are making a list and checking it twice
Any check that is about security or integrity should exist on the backend. Such checks may exist on the frontend as well, to make for a nicer UI, but they have to exist on the server.
This includes (see the sketch after this list):
Authentication and identification.
Permissions and belonging checks.
Data boundary checks, sanitization, and validation.
Any scaling value that will affect resource consumption.
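Here is a minimal sketch of what those checks can look like in a single request handler. It assumes Flask; the route, session keys, is_member helper, and size limit are all hypothetical:

```python
# A sketch of server-side checks: authentication, permissions,
# validation, and boundaries, all enforced before touching the data.
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for sessions
MAX_COMMENT_SIZE = 10_000  # explicit upper bound on data size

def is_member(user_id: int, group_id: int) -> bool:
    # Hypothetical: look the membership up in your datastore.
    return False

@app.route("/groups/<int:group_id>/comments", methods=["POST"])
def post_comment(group_id: int):
    user_id = session.get("user_id")
    if user_id is None:
        abort(401)  # authentication and identification
    if not is_member(user_id, group_id):
        abort(403)  # permission and belonging check
    payload = request.get_json(silent=True)
    if payload is None or not isinstance(payload.get("text"), str):
        abort(400)  # validation: shape of the data
    text = payload["text"].strip()
    if not 0 < len(text) <= MAX_COMMENT_SIZE:
        abort(400)  # boundary check, which also caps resource consumption
    # ...persist the comment here...
    return {"status": "ok"}, 201
```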
Sometimes, it means dealing with things that are adjacent to those, like routing. Indeed, while in theory routing could be mostly client side for a SPA, some routes may or may not be available in some contexts or depend on permissions.
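For instance, a minimal sketch of a permission-dependent route on the server (Flask again; the feature flag is hypothetical):

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"

@app.route("/beta/dashboard")
def beta_dashboard():
    # Hypothetical feature flag: to anyone without it, this route
    # behaves exactly like one that was never defined.
    if not session.get("beta_tester", False):
        abort(404)
    return {"page": "beta dashboard"}
```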
Yes, that's a lot of duplication, but it's not hard, although it's certainly tedious.
That's also why P2P apps are so difficult to get right: they don't have a central source of truth. Bitcoin solves that by having a majority vote on what the state is. Git doesn't solve it and lets the humans resolve it manually, assuming they all work together and trust each other.
But you do have a source of truth: the server. And it should check everything.