A bit misleading; Microsoft's and Xamarin's CLR libraries themselves are not native to any given OS, but the runtimes are. We could say the same thing about PHP, Swift, the JRE, or even Flash
https://learn.microsoft.com/en-us/dotnet/maui/what-is-maui
Fair enough. We are probably applying different interpretations of what should be considered native.
The .NET runtime for every platform but Windows is Mono. Mono is an abstraction layer that sits between your code and the runtime environment the respective operating system provides to the application.
I don't consider any abstraction layer that's not actually necessary as "native."
Here's why:
As far as I am aware, in the case of iOS' implementation of Mono, that abstraction layer is essentially a massive pile of wrapper classes and functions. Some of them simply forward function calls and their parameters to the OS' runtime environment and return the return values unchanged, others have to do some additional work to translate given data into different formats/encodings before they can be forwarded or returned. In the case of the pure wrapper classes and functions, that's at best a nuisance, wasting a few CPU cycles for each call and a megabyte or five of extra memory for the additional library code — especially because they can't be optimized away by the compiler since Mono is a precompiled library. In cases where given data needs to be translated first, however, it's outright wasteful every way you look at it.
On every computing platform except Windows PCs, but especially on mobile devices like phones and tablets, performance and energy consumption are absolutely crucial for a great user experience. As a software developer, you are obligated to respect your user and their devices. Don't waste their devices' bandwidth and persistent memory with unnecessarily large app payloads. Don't waste their devices' RAM with unnecessary abstraction layer libraries. Don't waste their devices' battery life with tons of unnecessary CPU cycles and memory allocations. And maybe most crucially: don't waste their time by making your app unnecessarily slow.
So it simply doesn't matter whether your C# code gets compiled into native ARM assembly code or into bytecode, or whether it's merely compiled just-in-time; and it simply doesn't matter whether the Mono runtime library is native ARM assembly code or not. It's still a massive waste of memory and of performance.
And don't get me started on the memory and performance benefits of ARC over C#'s garbage collection nonsense.
But from where I'm sitting, that's not even the worst part. The worst part is actually the use of frameworks like Xamarin et al.
I mentioned above that it's your obligation—among other things—not to waste a user's time. That does not only mean that your app should run as efficiently as possible. It also—and much more crucially—means that you should not force them to learn and navigate your non-native user interface.
You want to ship an Android app? Then make it look, feel, and behave like an Android app. Wanna ship something for iOS? Make sure it looks, feels, and behaves like an iOS app. Windows? macOS? Linux? Same thing.
No, that doesn't mean that every app needs to look alike within each platform's environment. There's plenty of room for creativity and ample opportunity to set your branding apart from all the others out there.
But what this DOES mean is that gestures should always work the same and lead to the same result. It means that the look and feel of a scroll container should feel the same in your app as it does natively in every other app. It means that your app should be true to each platform's (view) navigation concept and structure. View modality should behave the same. Keyboard shortcuts. Menu placements and labeling. Etc. etc. etc.
Xamarin et al. make this utterly impossible. And they do so by design. Because the whole point of these frameworks is to enable the developer to code once and deploy everywhere. That can only really be accomplished by reimplementing an entire UI framework that's independent of each targeted platform's native UI framework. And not only are those reimplementations yet another massive waste of payload space and RAM, they're also not optimized for the platform they're supposed to run on, and they don't feel "at home" there, either.
Most banal example? Open this new Roon ARC app on your iPhone. Tap on the settings icon, an album cover, anything, really, that takes you to a new screen. See that transition animation? It's much too abrupt. The animation duration is a tad too short, and the animation curve that is applied has an entirely different "ramp" than the kind of damping that Apple's native navigation controller uses. That makes this transition appear hectic, robotic, much less organic than Apple's native transition animation. And the next screen doesn't slide over the previous one as an opaque card, as is standard on iOS, but with an entirely transparent background, sliding over the existing content while that existing content fades into the background. The result is that, for the duration of that transition, your eye and brain have to process one single screen with intermingling texts and images instead of the usual visually clearly separated cards.
This—and countless other issues I could list—creates a user experience that feels out of place on the platform. Every deviation from a platform's widely established UI/UX norms creates a form of cognitive dissonance for the user. Muscle memory no longer applies. Known gestures lead to unexpected results. The brain has to invest more effort to locate information and functionality. All this creates frustration, subconscious at best, conscious at worst. The user will connect that first impression and the resulting frustration with your brand, and you'll spend a lot of time and money down the road dealing with the fallout.
This can easily be avoided by using frameworks that are native to the platform you want to deploy to. But this of course also means that you can no longer "develop once, deploy everywhere," negating the entire raison d'être for these frameworks.
If you care for your own brand, you want a quality product. A quality product means that you'll have to go purely native. That will cost extra time and money. But from the user's perspective, and that of your business' bottom line down the road, the result will speak for itself.
The alternative is to save time and money upfront by going "develop once, deploy everywhere," but with lasting negative impacts for your product, your brand, and user loyalty.
Yes, I obviously feel very passionate about this issue. So it's probably not immediately clear that I also think everybody should decide for themselves what technologies they want to base their products on. Although I would argue quite vehemently that one will be hard-pressed to find good objective arguments for it, there are plenty of subjectively very valid reasons to use Xamarin and other frameworks as the basis for a product. To each their own; use whatever you think works best for you.
But native it is not, and a great user experience it does not make.
Good enough for some businesses?
Apparently.
But not as good as it could be; and that was the entire point I wanted to make with the original post.
As an aside:
You compared Swift to .NET, PHP, the JRE, and Flash.
That honestly surprised me quite a bit, because Swift is a language, while the others are frameworks and runtimes.
When Swift was first introduced, a lot of folks in the industry misinterpreted Apple's presentation and assumed that Swift would be a language that comes with a separate framework sitting on top of the Objective-C runtime environment. Kinda like C# is the language used to write code when you want to use .NET, which in turn sits on top of Mono, which in turn sits on top of the Objective-C runtime. (At least in the case of Mono for macOS and iOS.)
That is not the case. Swift is merely a language.
Wherever code written in Swift needs to work together with code written in Objective-C—and that includes precompiled libraries—bridging headers provide LLVM with the necessary means to interpret how it has to handle each side's respective calls. It works similarly with mixed codebases that are partly written in C or C++ and Objective-C; in those cases, bridging headers aren't necessary because those languages already come with header files. Swift doesn't use headers, so they have to be supplied by the developer when needed.
In the case of macOS and iOS applications, LLVM compiles Swift into CPU instructions for each and every CPU that the application should support. Yes, that means that it actually produces different binaries for different iPhones and Macs, even though they technically share the same operating systems. This machine code can optionally be unified into a single binary, usually for debug and ad hoc builds, or split into separate binaries that then get deployed to the respective target devices through the App Store's app thinning process. When you download an app for your iPhone 14, it gets a different binary than an iPhone 14 Pro will get, which gets a different binary than an iPhone 12, and so forth.
That's as close to the actual hardware as you can get. LLVM is even able to optimize the instructions better than you could ever do by writing assembly by hand. The only things that are precompiled are the system frameworks like Foundation, UIKit, AVKit, StoreKit, CloudKit, etc. — and those are all precompiled into the same CPU-specific binaries as the application eventually will be.
There are no wrappers, no additional abstraction layers whatsoever.
So I'm not sure why exactly you mentioned Swift alongside these frameworks, but I did find it a bit surprising.