"We're Done with Teams!": Why Europe's Break with Big Tech Matters for Developers

As European governments from Denmark to Germany phase out Microsoft in favor of open-source alternatives, a quiet revolution is reshaping the global tech landscape. What does this mean for developers, and is digital sovereignty the future or just political theater? We spoke with Nikolay Gushchin, Senior Software Engineer at Marks & Spencer and Toptal consultant with over 8 years of experience building scalable systems for Fortune 500 companies, to understand how this shift impacts the development community and what it signals about the future of technology.

You've worked with major corporations and seen their tech stacks firsthand. How realistic is it for governments to actually replace Microsoft's ecosystem with open-source alternatives in 2024?

In my experience, completely swapping out Microsoft’s ecosystem for open-source alternatives in one fell swoop is ambitious, maybe too ambitious for most governments in the short term. Large organisations (be it a Fortune 500 company or a government ministry) have decades of processes, custom tools, and user habits built around Microsoft Windows, Office, Exchange, Teams, and so on. Replacing all that with Linux, LibreOffice, Nextcloud, etc., is possible. We’ve seen examples like the German state of Schleswig-Holstein moving 30,000 workers off Microsoft Office to LibreOffice and planning to migrate to Linux. But doing this across an entire government on a short timeline means facing a lot of inertia and technical friction. Consider Munich’s famous attempt: they did migrate ~12,000 desktops to Linux and saved millions, but a later political shift caused a rollback. That underscores that it’s not just a tech challenge; it’s also about long-term political commitment and sustained will.

From a pure tech perspective, the open-source alternatives today are more mature than they were a decade ago. LibreOffice can handle Word docs and spreadsheets, Linux desktops are far more user-friendly, and many government workflows are now web-based (so the underlying OS or office suite matters a bit less). So, technically, it’s more realistic now than it was in, say, 2010. We have success stories: France’s Gendarmerie saved millions switching 37,000 PCs to Linux, and numerous cities (Barcelona, Toulouse, etc.) run open-source stacks. However, the realism question comes down to scope and timeline. Small-scale or phased migrations can work (migrate one department at a time, provide lots of support), but a rapid, full replacement is likely to hit snags. In corporate projects I’ve led, even migrating a single large application or adopting a new framework took significant effort in planning, retraining, and troubleshooting. Replacing an entire ecosystem is exponentially harder. So I’d say it’s possible for a determined government to start that transition now, but expecting it to be fully done, smooth, and quick is not realistic. There will be bumps, and most governments will move gradually, possibly maintaining some Microsoft systems as a fallback during the transition.

Europe is citing "digital sovereignty" as the main driver for this shift. As someone who's built systems for both European and American clients, how much does data residency and control actually matter in practice?

It matters a lot, especially in Europe, but perhaps in different ways than the political soundbites suggest. “Digital sovereignty” is about control: ensuring that your country’s data and infrastructure aren’t at the mercy of a foreign company or government. In practical terms, for a developer, this often translates to requirements like: “Please host our data in an EU data center,” or “We can’t use that cloud service because it might send data to the US.” When I built systems for European clients, we frequently had to pay attention to where user data lives and which legal jurisdictions have access.

For instance, a European healthcare project would insist on all databases being on EU soil, managed by an EU-based provider or at least an EU region of a cloud vendor, to comply with regulations and ease privacy fears. American clients (outside of highly regulated sectors) were generally less concerned about where the data was, as long as it was secure and the service was reliable.

In Europe, this concern is not just theoretical; it’s driven by laws like GDPR and by real geopolitical events. There’s been talk since the Trump era that a U.S. administration could, in a conflict, pressure tech companies to cut off services or hand over data from Europe. European stakeholders took note when, for example, the U.S. CLOUD Act was passed, and when the U.S. sanctioning of an international court official led to his Microsoft email account being suspended. So European governments and many businesses have a pragmatic paranoia: they want to avoid being too dependent on foreign tech that could be turned off or used as leverage.

In practice, this means as a developer, I might have to use an EU-based cloud, store encryption keys locally, implement strict data export controls, etc. It sometimes means extra engineering work or using a perhaps less slick tool because it’s open-source or locally hosted.

That said, cloud giants are aware of this pressure and have adapted, e.g., Microsoft, Amazon, and Google now all offer EU regions and even special cloud setups for EU government data. OpenAI recently announced options to process/store data in Europe for European customers. So the gap is narrowing. But in day-to-day development, yes, data residency and control requirements can shape architecture significantly for European projects.

You design with privacy-by-default, you consider where every API’s server is located, and you often build extra audit logging or encryption to ensure compliance. For U.S. projects, you still care about security and privacy, but you’re less likely to get a requirement like “data must never leave our national borders.” In sum, data sovereignty absolutely matters in practice: it influences tech choices and architecture, but it’s a bigger make-or-break factor in Europe due to regulations and recent history. In the U.S., business considerations often outweigh sovereignty concerns, whereas in Europe, compliance and sovereignty often come first, even if it means engineering trade-offs.
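To make the "data must never leave our borders" requirement concrete, here is a minimal sketch of how a residency constraint can be enforced at configuration time rather than discovered in an audit. The region names and service list are purely illustrative assumptions, not from any specific project mentioned in the interview:

```typescript
// Hypothetical sketch: fail fast at startup if any dependency would
// store data outside the EU. Region identifiers are illustrative.

type Region = "eu-west-1" | "eu-central-1" | "us-east-1";

interface ServiceConfig {
  name: string;
  region: Region;
}

const EU_REGIONS: ReadonlySet<Region> = new Set<Region>([
  "eu-west-1",
  "eu-central-1",
]);

// Throws before the app serves traffic if a non-EU region slipped in.
function assertEuResidency(services: ServiceConfig[]): void {
  const violations = services.filter((s) => !EU_REGIONS.has(s.region));
  if (violations.length > 0) {
    throw new Error(
      `Data-residency violation: ${violations.map((s) => s.name).join(", ")}`
    );
  }
}

assertEuResidency([
  { name: "postgres-primary", region: "eu-central-1" },
  { name: "object-storage", region: "eu-west-1" },
]); // passes: both services are in EU regions
```

The point is less the ten lines of code than the posture: residency becomes a checked invariant of the architecture instead of a line in a compliance document.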

Denmark's Digital Minister admitted they might have to roll back if issues arise. Based on your experience migrating systems at companies like Gartner, what are the biggest technical hurdles these governments will face?

Major migrations are notoriously difficult. When I was at Gartner, we migrated a web application from a proprietary SSR framework to Next.js, and even that, a relatively contained project, surfaced all kinds of hidden dependencies and edge cases. Now imagine a government trying to migrate thousands of users from Windows/Office to Linux/LibreOffice; the complexity is off the charts. Some of the biggest technical hurdles I foresee: compatibility and legacy dependencies are number one. Governments have archives of Word and Excel documents with complex macros, custom fonts, or legacy formats; in the EU especially, those documents span many languages, including ones written in non-Latin scripts. Converting those so they still work perfectly in LibreOffice is a huge task.

Many documents might break or lose fidelity (I recall the Danish team noted that complex Office macros or embedded objects might not convert cleanly). Similarly, a lot of government workflows are built around Microsoft ecosystem features, SharePoint sites, Outlook plugins, Active Directory authentication, maybe even Excel-based mini-applications. Replacing or integrating those with open-source alternatives (say, using Nextcloud instead of SharePoint) often means developing custom connectors or doing manual migration of content.

Governments will need strong project governance, pilot programs, and contingency plans (like keeping some Microsoft licenses as backup) to get through it. Denmark’s minister saying they might roll back if needed is just being realistic. You hope not to, but you'd better have a plan B in case, for example, a critical system just isn’t working on the new setup.

You've worked extensively with modern web technologies and cloud platforms. Has the shift to browser-based applications made the underlying OS less relevant, as the article suggests?

I would agree that the operating system matters much less now than it did 15-20 years ago, especially for everyday productivity and enterprise tasks. The browser (and browser-based apps, like VS Code) is the great equaliser. In my own work, I’ve seen that whether someone is on Windows, macOS, or Linux, they can all use the same web applications, and most business software has moved to the web. For example, at one company, we built an internal dashboard as a React web app.

Our team had a mix of Mac and Windows laptops, but it didn’t matter: everyone accessed the tool through Chrome or Firefox, and occasionally Safari. As long as you have a modern browser, your experience is virtually the same across OSes. This trend has been so strong that even Microsoft’s Office suite and Google’s productivity apps are fully usable via browser now. From a developer perspective, we often target “Chrome/Firefox” rather than “Windows/macOS” as the platform.

Historically, this was the fear that Microsoft itself had during the browser wars, that a browser could become a sort of meta-platform that diminishes the importance of the underlying OS. In fact, U.S. antitrust findings noted that browsers can act as a middleware layer that “commoditises” the underlying operating system. That’s basically what we’re living through now: much of what an employee or user does is in a web app, so they don’t care if it’s a Linux PC or a Windows PC underneath (some might not even notice the difference beyond the logo at boot time).

This shift has definitely helped initiatives like Europe’s, because one big reason prior government Linux projects struggled was certain thick-client apps or custom Windows-only software. Nowadays, if your email, document editing, CRM, etc., all run in a browser, you just need an OS that can run a modern browser, which Linux can. For instance, when Schleswig-Holstein in Germany moved off Microsoft Office, they chose Open-Xchange (which is basically a web-based email and calendar) to replace Outlook. That kind of approach makes the OS swap easier: users will click a URL for email instead of opening Outlook, and it doesn’t matter which OS that browser runs on.

That said, the OS isn’t completely irrelevant. There are still things like device management, security policies, or specialised peripherals where the OS choice can matter. And heavy-duty creative or engineering applications (Photoshop, CAD tools, etc.) might not have full browser equivalents yet, those still often need a specific OS. But for the majority of knowledge workers and government clerks who mostly use email, browsers, and office docs, the OS is just a vessel now. I’ve certainly felt that personally: I’ve been switching between Windows, Mac and Linux quite a lot in my career, and aside from a short adjustment period, it’s mostly the same experience because all my key tools (VS Code, Slack, Jira, etc.) are either cross-platform or web-based. So yes, the move to browser-centric computing has made the underlying OS far less critical in daily life. That’s a big reason why an idea like “let’s put Linux on government desktops” isn’t as crazy in 2024 as it would have been in 2004.

The article mentions Europe launching OpenEuroLLM to build sovereign AI models. From a developer's perspective, can government-funded projects really compete with the pace of innovation at OpenAI or Google?

I’m really excited about a potential European LLM, whether private or government-backed, but I have to be candid: it’s going to be tough for them to keep up with the likes of OpenAI or Google. The pace and scale at which the tech giants operate are hard to match. Let’s talk about resources first: OpenAI has billions of dollars from Microsoft; Google basically has unlimited money and a decade of AI research head-start. In contrast, the OpenEuroLLM program (Europe’s initiative for open-source multilingual models) has something like a €37 million budget specifically for model-building. To you and me, that’s a lot of money, but in the context of training cutting-edge large language models, it’s a drop in the ocean. Training GPT-4 reportedly cost tens of millions of dollars in compute alone. So the funding gap is enormous.

Another factor is agility. Government-funded projects often involve consortia of universities, companies, committees... which can mean slower decision-making. OpenEuroLLM is a collaboration of 20+ organisations across Europe. As a developer, I know coordinating a project even across 3-4 teams is hard; imagine 20 organisations, each with its own bureaucracy. Industry folks have already questioned whether a “sprawling consortia of 20+ organisations” can execute with the focus of a single private company. One expert pointed out that Europe’s recent AI successes actually came from small, focused teams, like startups Mistral AI or LightOn, rather than huge multi-partner projects. When a private company like OpenAI wants to try an experiment, they just do it; in a government project, you might need approvals, alignment meetings, compliance checks, etc., which slows things down.

From a developer perspective, I’d love to have more open-source, high-quality models to work with, and I hope these initiatives succeed. But I keep my expectations realistic: OpenAI and Google operate like sprinting cheetahs, while government initiatives are more like marathon runners. They have to run a longer race under more constraints. It’s not impossible to compete, it’s just a very asymmetric matchup. Europe’s best bet might be to leverage its strengths (collaboration across countries, strong academic base, clear ethics guidelines) to produce AI models that are good enough and more trusted, even if they’re not always first-to-market.

You've implemented A/B testing and personalization features that generated millions in revenue at M&S. How does Europe's strict data privacy stance (GDPR, local data storage) impact what developers can actually build?

Europe’s privacy laws definitely make us approach personalisation and testing differently. When I was building personalisation features at M&S, we had to be hyper-aware of GDPR. For example, any A/B testing or recommendation algorithm that used customer data needed to respect consent choices. Under GDPR (and ePrivacy laws), you can’t just freely track users across your site and drop cookies to test new features unless you’ve gotten their informed consent. In practice, this means a lot of the data-heavy personalisation that might be common in the U.S. has to be rethought in the EU. We had to implement robust consent management. If a user opted out of tracking cookies, we had to ensure our A/B testing framework didn’t include that user. That inherently can reduce the size of your test sample or the granularity of data you collect for personalisation.
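The consent-gating described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the actual M&S implementation; the function and field names are my own, and a real system would read consent state from a consent-management platform rather than a boolean on the user object:

```typescript
// Consent-gated experiment assignment: opted-out users are never
// enrolled and contribute no experiment data.

interface User {
  id: string;
  trackingConsent: boolean;
}

// Deterministic hash so the same user always lands in the same bucket.
function hashToUnit(id: string): number {
  let h = 0;
  for (const ch of id) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return (h % 1000) / 1000;
}

// Without consent, the user simply gets the default experience.
function assignVariant(user: User, experiment: string): "A" | "B" | "default" {
  if (!user.trackingConsent) return "default";
  return hashToUnit(`${experiment}:${user.id}`) < 0.5 ? "A" : "B";
}
```

Note how the consent check sits inside the assignment function itself, so no downstream analytics call can accidentally include an opted-out user; this is exactly where the reduced sample size comes from.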

Local data storage requirements (and data residency clauses) mean that sometimes developers can’t use the quickest solution available. For instance, if there’s an analytics tool that would help optimise conversions but doesn’t store data in Europe or offer a compliant mode, you’d have to find an alternative.

So, as a developer, the strict privacy stance forces you to be more thoughtful and sometimes more creative. It’s not that you can’t do A/B testing or personalisation (you absolutely can, and I did, yielding big ROI), but you do it in a compliant way: get consent upfront with a clear explanation, ensure data is stored in Europe or in compliance with standards, and often design the system such that if a user says “don’t track me,” they still get a decent experience (maybe a generic one, without personalisation). It’s a bit more overhead and sometimes a bit less data to work with. The upside is that users who do consent are genuinely okay with your using their data, and that trust factor is higher. But yes, compared to a free-for-all growth hacking culture, the European approach requires more restraint and engineering work on privacy features. In my view, it’s a good thing in the long run, it forces us to build with respect for the user, but it definitely clips the wings of what some marketing or product teams might want to do if there were no rules.

Looking at your experience with both proprietary and open-source technologies, do you think we're heading toward a "multipolar tech world" with regional ecosystems, or will convenience and inertia keep Big Tech dominant?

We’re seeing strong signs of a more multipolar tech world emerging, but I suspect it will coexist with Big Tech’s ongoing dominance, at least in the consumer space. By “multipolar,” I mean different regions forging their own tech stacks and standards (like a European ecosystem, a Chinese ecosystem, etc., separate from the U.S. Big Tech sphere). There’s evidence of this already: China is the clearest example, they built their own ecosystem almost end-to-end (their own social networks, search engines, and even operating systems). In fact, Huawei’s HarmonyOS mobile operating system just recently surpassed iOS in market share within China, which shows how a geopolitically driven tech push can actually change the landscape in a region. So in China, convenience and inertia didn’t keep Apple dominant: a domestically developed platform took over due to a mix of government push and national preference.

Europe is trying something similar in principle, though with a different approach and without the adversarial edge (leveraging open-source instead of inventing everything from scratch). We see calls for things like an “EU-Linux” for public institutions. The very fact that European governments are collectively pushing back on American Big Tech, whether it’s through antitrust fines, privacy regulations, or now this move to open-source, suggests the political will is there for a more sovereign tech path. If that momentum continues, we could end up with at least a partial European-centric ecosystem (for example, European cloud services that meet “GAIA-X” standards, European AI models like OpenEuroLLM, etc.). In other words, the pieces of a regional ecosystem are being put in place.

That said, habits and network effects are incredibly powerful. For everyday consumers and even many businesses, it’s hard to beat the convenience of Big Tech’s products. Google, Microsoft, Apple: their tools are popular not just because of inertia but because they’re really good and everyone uses them. Even if Europe has its own options, will people use them if they perceive them as less polished? The path of least resistance is to stick with what you know. A lot of companies might continue using U.S. tech because it’s what their employees are trained on and it just works. For instance, despite all the talk of digital sovereignty, most European companies haven’t dumped Windows; they’ll follow the governments’ lead only if it clearly benefits them.

My hunch is that we’ll end up with a hybrid scenario. In sensitive domains (government, critical infrastructure, maybe healthcare), regions will push for their own tech for control reasons, that’s where multipolarity really grows. The general public and global industries, though, might still largely use the big global platforms if those remain superior or more convenient. Think of it like how we have multiple geopolitical power centres, but global trade still ties everyone together. In tech, we might get Chinese apps dominating in China, European open-source stacks in governments, but an average person in London or New York might still be using an iPhone with Google Maps and chatting on WhatsApp. Unless a regional alternative offers a significantly better or safer experience, people tend to stay where their friends and colleagues are.

If you were advising a European startup today, would you recommend building on open-source infrastructure from the start, or is vendor lock-in still worth the convenience?

I would say this: use open-source foundations as much as you can without losing too much developer productivity, but don’t be dogmatic about avoiding all vendor services. For a startup, speed is life. The allure of Big Tech cloud platforms (AWS, Azure, GCP) and their proprietary services is that they let you move super fast. Need a database? In 5 minutes, you have Amazon RDS up. Need authentication? You can plug in Firebase or Auth0 rather than coding it yourself. The convenience is incredible, especially when you have a small team. In the early days, getting to market quickly often matters more than avoiding every bit of lock-in.

However, there are smart ways to balance this. In my experience, you can choose open-source components at the core, and still use cloud hosting or managed services around them. For example, use PostgreSQL (open-source database), but maybe have it managed by a cloud provider for now. Use Kubernetes or Docker to containerise your app so it’s portable, but maybe run it on a managed Kubernetes service. Essentially, design so that if you needed to, you could self-host or move to a different provider later with some effort. This avoids being stuck if costs rise or if sovereignty becomes an issue (especially for a European company that might later need on-prem or EU-only deployment).

One lesson from the whole sovereignty discussion is that vendor lock-in is real and can bite you later with high costs or limited flexibility. Enterprises are learning that the hard way, most will still be with Microsoft or AWS for years. A total pivot isn’t realistic, but they are recognising the downsides of being completely dependent. As a startup, you have a chance to set the tone early. I’d advise picking tech that doesn’t irrevocably tie you to one vendor’s ecosystem if you can help it. For instance, favour open-source databases, frameworks, and languages. Almost every big proprietary cloud service has an open equivalent (or at least a cloud-neutral interface): instead of BigQuery, you could use PostgreSQL or ClickHouse; instead of Azure Cognitive Services for ML, maybe use open-source ML libraries and host models yourself, etc.

That said, there’s a pragmatic side: if using a proprietary service gives you a 10x boost in speed to deliver a critical feature, a startup might choose it and worry about the lock-in later (when hopefully they have more resources). The key is to architect with an escape hatch. Maybe you use AWS’s DynamoDB early on because it’s fully managed and you’re in a hurry, but you abstract your data access so that if you ever needed to swap it out for, say, MongoDB or PostgreSQL, you could. It’s about avoiding sprinkling proprietary dependencies everywhere in your code in a way that can’t be undone.
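The escape-hatch idea above is just the classic ports-and-adapters pattern. A minimal sketch, with an in-memory backend standing in for a managed service like DynamoDB (all names here are illustrative, not from any real codebase):

```typescript
// Business logic depends only on this interface, never on a vendor SDK.
interface KeyValueStore {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// Today this could be a thin wrapper over a managed cloud service;
// tomorrow, a PostgreSQL- or self-hosted implementation can be swapped
// in without touching the calling code.
class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  async put(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  }
}

// Example of business logic that only ever sees the interface.
async function saveSession(
  store: KeyValueStore,
  userId: string,
  token: string
): Promise<void> {
  await store.put(`session:${userId}`, token);
}
```

The cost of the abstraction is one interface file; the payoff is that "walking away from your cloud provider" becomes a new class, not a rewrite.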

In a European context, also consider your customers: if you target government or privacy-conscious users, starting on an open, self-hostable stack could actually be a selling point. For example, if you build a SaaS on purely open-source components, you could even offer on-prem installs (some EU clients like that). It might make some sales easier in Europe if you can say “no American cloud involved”; that’s a unique advantage an open approach gives you in this climate.

So my recommendation: lean open-source for the core (it gives you flexibility, community support, and no licensing chokehold). Use cloud infrastructure to not reinvent the wheel, but try to stay cloud-agnostic in how you use it. Avoid using some esoteric proprietary service if there’s an open alternative that’s nearly as good. And keep an eye on your architecture so you’re not painting yourself into a corner. In short, enjoy the convenience of the cloud, but design as if one day you might need to walk away from your cloud provider, which usually leads to good decisions. This way, you get the best of both: velocity now, and freedom later.

The article draws parallels between this OS shift and previous platform changes like mobile. Having worked on both web and mobile projects, do you see another major platform shift coming that could disrupt current assumptions?

It’s always hard to predict the “next big platform” until it’s upon us, but there are a couple of strong contenders on the horizon. One that I genuinely feel could be transformative is augmented reality (AR) and mixed reality. We’ve been hearing about AR/VR for years, and it’s true that it hasn’t fully broken into the mainstream yet, but with devices like Apple’s Vision Pro coming out, we might be at the start of spatial computing becoming a new platform. If AR glasses or headsets eventually become lightweight and common (like smartphones are today), that could shift a lot of assumptions. The interface paradigms would change from screens to holographic or projected displays in your environment. As a developer who went through the shift from desktop web to mobile, I recall how we had to rethink UI/UX entirely for touchscreens and small displays. An AR shift would be even bigger: we’d have to design for 3D space, gestures in mid-air, and an always-on blending of digital and physical. That could disrupt which companies lead (maybe new AR-native companies rise) and force existing products to adapt heavily. It’s a bit further out, but I see it as a likely platform evolution, essentially the “post-mobile” era could be wearable spatial devices.

Another major shift I see underway is the rise of AI as a platform in itself. We traditionally think of platforms as hardware/OS (PC, mobile, etc.), but think about conversational and generative AI. We already have people using AI assistants (like ChatGPT, voice assistants) as a new way to get information or perform tasks. It’s conceivable that, instead of using a bunch of apps, in the future, a lot of interactions could be through a general AI assistant that orchestrates things for you. From a developer’s perspective, that’s a paradigm shift: you might be building “skills” or plug-ins for an AI platform rather than full-blown apps with a UI. OpenAI’s plugin ecosystem hints at this future. If that takes off, it could upend how software is packaged and delivered. It’s less tied to an OS or device, it’s more about hooking into an AI agent that everyone uses across devices. This could disrupt assumptions like needing to download separate apps or navigate complex UIs, people might just ask an AI to handle things.

Also, to connect with the sovereignty theme, there’s a possibility of a shift towards federated or decentralised platforms. We’ve seen early rumblings with things like blockchain/Web3 (though that had its hype and crash) and federated social networks (like Mastodon or Bluesky). If there’s a movement against centralised big platforms, we could see more decentralisation as a “platform” philosophy. That would change how we build services: maybe more peer-to-peer, more user-controlled data. It’s hard to say if this will become mainstream, but technologically, it’s possible to have a paradigm where no single company owns the platform, much like email is today.

Final question: If this European experiment succeeds and governments prove they can run on open-source software, what does that mean for the future of enterprise software development globally?

If Europe pulls this off, and a few years from now Denmark, Germany, France, etc. are all running their administrations on open-source OSes and office suites, it’s going to be a powerful proof of concept that could ripple out worldwide. First off, it would show that scale is not a blocker for open-source in client computing. That myth of “open-source can’t handle enterprise desktop needs” would be busted. As a developer, I think we’d see a boost in demand for open-source solutions and skills. Governments would have invested in improving these tools (to meet their needs), and those improvements benefit everyone. Maybe LibreOffice or Linux desktop environments will become dramatically better due to this push, making them more viable for businesses, too.

Enterprises tend to be conservative, but they also pay attention to cost and to what their partners do. If big public sector players prove you can save money and avoid lock-in by going open-source, some companies will follow. We might see more enterprises adopting open document formats and ensuring their software works on Linux, for example, because they’ll need to interoperate with government clients. It could kick off a virtuous cycle: more adoption leads to more vendor support for open formats and cross-platform software, which leads to even more adoption.

One likely outcome is a stronger open-source ecosystem for enterprise. There will be more companies offering commercial support for open-source alternatives (kind of like Red Hat did for Linux servers). We’re already seeing smaller companies offering support for LibreOffice, Nextcloud, etc., but success in Europe could scale that industry. As a dev, that means more career paths that aren’t just “[Vendor X] certified engineer” but rather skills in Linux, LibreOffice macros, open-source security, etc., being highly valued.

For the big software vendors, if I put myself in their shoes, I’d either adapt or double down on what makes my product special. Microsoft, for instance, might improve interoperability (to play nicely with open ecosystems) or emphasise cloud services where they have an edge. Or they might adjust pricing, knowing governments have a credible plan B now. In any case, competition is good. We could see Big Tech adjusting licensing or innovation strategies to “win back” those who are tempted by open-source.

Globally, if Europe demonstrates digital sovereignty successfully, other regions could be inspired to do the same. Maybe Latin American or African countries, etc., might launch similar initiatives to use open-source for their governments or schools. This could gradually lead to a more diverse global tech environment rather than one-size-fits-all from Silicon Valley. For developers globally, it means you might increasingly write software for multiple ecosystems. For example, you’d ensure your app runs not just on Windows/Mac but also on a Linux-based government platform. Web developers might be asked to target open-source browsers or follow stricter data localisation rules to sell into certain markets. Essentially, it broadens the requirements we consider when building software.

From a high-level viewpoint, it could mark a shift in how people view software procurement. Instead of “nobody gets fired for buying Microsoft/Oracle/etc,” governments (and maybe some enterprises) might say “we prefer open-source by default unless there’s a strong reason otherwise.” Some already have these policies in nascent forms. If it succeeds, that mindset could grow. It doesn’t mean proprietary software dies, but it would have to justify itself more clearly in the face of viable open alternatives.

Finally, success would reinforce the idea that open collaboration can produce enterprise-grade software. As a developer, that’s heartening. It means the code I contribute to an open project might end up in mission-critical use in a ministry or school system. It could attract more developers to contribute to open-source, knowing it has a big impact. In the long run, that raises the bar for everyone. We might see a future where public infrastructure software is largely open (much like how web servers and programming languages are today), and companies compete on service, support, and cloud offerings on top of that.

In summary, a successful European open-source transition could rebalance the software world: making open solutions a first-class citizen in enterprise and government, spurring innovation and competition. It signals to CIOs everywhere that they have a choice. I don’t expect Fortune 500s to all suddenly drop their Microsoft licenses, but they might use the threat of doing so as leverage, or start incorporating more open-source to reduce costs. Even Microsoft might pivot (they already embrace open-source more than before). As developers, we’d have a richer set of platforms and tools to work with, and hopefully a more open, interoperable stack to build on. It’s like the next chapter in tech: not a complete overthrow of Big Tech, but a significant broadening of options. And for us building the software, that’s an exciting prospect because it means more innovation, less monoculture, and new problems to solve in integrating these ecosystems.