
AI's Not the Future? Part II: Data & Privacy Concerns (2025)
For those of you who are avid readers: firstly, thank you. By now, I am certain you understand my approach and style on more robust topics: circuitous, layered, and parallel. If you'll allow me, the present topic at bar is likewise complex, multi-layered, and multi-faceted; I will present some structural pieces, then tie them together once the pieces are properly laid. BE WARNED, it may yet take some time to get there. But I do believe the analysis is worthwhile.
And, if you missed the previous Blog, please visit here:
Meta-Analysis: AI's Impact on the U.S. Economy (2020 – 2026) and beyond [Small businesses. WHAT NOW?]
This being a 3-part series, please take your time walking through the material, or binge-read. Player's Choice🙌
If you missed PART I: 🤖A.I. is NOT the Future😱: Let me explain [3-Part Treatise]
PART II: AI's Promise vs. REALITY
With the "YOU are the product" model working so splendidly, all that big enterprise has to do is speed up that data ACQUISITION and DISTRIBUTION and, essentially, they print money, right? Not so fast. However, these companies' torrential rush to fund, build, and spread their own AI models may provide a clue. Before we dive into the clue(s), let us first examine what they have been doing since 2022 (OpenAI/ChatGPT's global debut).
The current trajectory of AI development, especially with companies like Oracle investing heavily in data centers and AI as a platform, is capital-intensive and raises questions about sustainability and user tolerance for existing business models.
Key Issues with AI Infrastructure and Business Models
Infrastructure and Capital Expenditure:
Building AI data centers demands enormous upfront investments, often in the billions per facility, covering specialized hardware like GPUs and TPUs, robust energy systems, and advanced cooling to manage heat from intensive workloads.
For context, a single hyperscale AI data center can cost $7 billion or more, driven by hardware alone, with total global data center investments projected to hit $7 trillion by 2030 across the value chain (including real estate, power infrastructure, and chips).
Oracle's shift to AI-centric cloud services exemplifies this: in FY2025 (ended May 31, 2025), the company spent $21.21 billion on capital expenditures (CapEx), a 37% share of its $57.4 billion total revenue—up from 13% the prior year—primarily for OCI GPU clusters and data center expansions. This marked Oracle's highest-ever CapEx, flipping free cash flow negative at -$0.39 billion for the first time since 1992. Looking ahead, Oracle projects FY2026 CapEx at $35 billion (potentially higher), focused on revenue-generating equipment like NVIDIA Blackwell GPUs for AI superclusters capable of scaling to 131,072 units. These are long-term commitments with ROI timelines stretching 5–10 years, vulnerable if AI adoption lags or lease-out is incomplete (low interest & low customer turnout).
For instance, Oracle's Q1 FY2026 earnings (ended August 31, 2025) showed cloud infrastructure revenue up 55% year-over-year to $3.3 billion, but total CapEx for the quarter hit $8.5 billion, underscoring the front-loaded risk. Source
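For the numerically inclined, the ratios quoted above are simple arithmetic. Here is a quick sketch using the figures cited in this post (in billions USD); it is illustrative arithmetic, not a financial model:

```python
# Back-of-envelope check of the Oracle FY2025 CapEx figures cited above.
# Dollar amounts in billions (USD), taken from the text.

capex_fy2025 = 21.21      # capital expenditures, FY2025
revenue_fy2025 = 57.4     # total revenue, FY2025

capex_share = capex_fy2025 / revenue_fy2025  # ~0.37, i.e. the 37% cited

# FY2026 projection: $35B of CapEx against FY2025's revenue base would push
# that share past 60% unless revenue grows sharply alongside it.
capex_fy2026 = 35.0
implied_share_fy2026 = capex_fy2026 / revenue_fy2025

print(f"FY2025 CapEx share of revenue: {capex_share:.0%}")
print(f"FY2026 CapEx vs FY2025 revenue: {implied_share_fy2026:.0%}")
```

The second ratio is deliberately naive (it holds revenue flat), but it shows the scale of growth Oracle is betting on.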
Already, there are concerns about spending outpacing earnings. With so many competitors in the AI space already (folks, NOVEMBER 2025 marks ONLY 3 years since AI debuted to the world), the competition is at break-neck pace!
Ongoing maintenance and upgrades compound costs, as AI hardware obsolesces rapidly: NVIDIA's GPU cycles average 2–4 years, with the A100 (launched 2020) reaching end-of-life in February 2024, just four years later, halting software support and forcing replacements with newer models like Hopper or Blackwell. Energy demands are equally staggering: AI data centers could consume 945 terawatt-hours (TWh) globally by 2030 (double 2022 levels), equivalent to Japan's total electricity use, with U.S. data centers alone accounting for nearly half of national demand growth. Operational expenses climb accordingly: wholesale electricity costs near U.S. data centers are up 267% since 2020, and environmental strain mounts, as each kWh requires ~2 liters of water for cooling, potentially adding $37.50 monthly to residential bills in high-impact states like Virginia. For Oracle, this manifests in hyperscale builds like its 2GW leased-capacity spree (costing ~$3 billion annually in expenses), amplifying both carbon emissions and upkeep burdens. Source
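To make that ~2 liters-per-kWh cooling figure concrete, here is a back-of-envelope sketch. The 100 MW facility size is an assumed example for illustration, not a figure from this post:

```python
# Illustrative back-of-envelope on the cooling-water figure cited above
# (~2 liters of water per kWh). The facility size is an assumption.

facility_mw = 100                                  # assumed data center power draw
hours_per_day = 24
kwh_per_day = facility_mw * 1_000 * hours_per_day  # MW -> kW, times hours

liters_per_kwh = 2                                 # cooling-water estimate from the text
water_liters_per_day = kwh_per_day * liters_per_kwh

print(f"Energy: {kwh_per_day:,} kWh/day")
print(f"Cooling water: {water_liters_per_day:,} liters/day")
```

A single assumed 100 MW campus at full draw works out to millions of liters of cooling water per day, which is why the residential-bill and water-strain numbers above are not abstractions.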

Borrowing and Financial Risk
To fuel these expansions, Oracle has aggressively tapped debt markets, with total long-term debt reaching $104.10 billion by FY2025 end (debt-to-equity ratio of 5.09x, far exceeding peers like Microsoft's 33%). In September 2025, it issued $18 billion in investment-grade bonds—the year's second-largest sale—to fund AI data center buildouts, including GPU procurements and regional hubs. This follows a $38 billion debt package for Texas and Wisconsin campuses tied to Oracle. The crown jewel is a $300 billion, five-year cloud compute deal with OpenAI (part of the "Stargate" initiative), requiring Oracle to borrow an estimated $100 billion over four years (~$25 billion annually) for 4.5GW of U.S. capacity—starting payments in 2027. Q1 FY2026 financials (from Oracle's September 9, 2025 earnings release) reflect this strain: operating cash flow hit $8.1 billion, but free cash flow plunged to -$362 million amid $8.5 billion CapEx, with remaining performance obligations (RPO) surging 359% to $455 billion—much tied to AI contracts with OpenAI, xAI, Meta, NVIDIA, and AMD. Source
If AI revenue falters, balance sheets could buckle—Moody's flagged Oracle's "high leverage and negative cash flow" in July 2025, citing exponential growth risks despite "tremendous potential." Competition intensifies: Big Tech's collective $320 billion CapEx in 2025 (Amazon, Meta, Microsoft, Google) risks oversupply, with hyperscalers like AWS and Azure vying for the same GPU-constrained market, potentially sparking price wars that erode margins (Oracle's non-GAAP operating margin held at ~40% in Q1 FY2026 but faces pressure). Source
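The leverage figures above are worth sanity-checking too. A short sketch, using only the numbers cited in this section (billions USD); this is arithmetic, not analysis:

```python
# Quick check of the leverage figures cited above (billions USD).

long_term_debt = 104.10   # Oracle long-term debt at FY2025 end
debt_to_equity = 5.09     # ratio cited above (5.09x)

implied_equity = long_term_debt / debt_to_equity  # equity base implied by the ratio

# The OpenAI/"Stargate" borrowing pace as described: ~$100B over four years.
stargate_borrowing = 100.0
annual_borrowing = stargate_borrowing / 4

print(f"Implied shareholder equity: ~${implied_equity:.1f}B")
print(f"Implied annual borrowing for the OpenAI deal: ~${annual_borrowing:.0f}B")
```

In other words, the annual borrowing for one customer deal is on the same order as the company's entire implied equity base, which is exactly why Moody's flagged the leverage.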

Leasing and Efficiency Assumptions
Leasing models assume near-100% utilization to be profitable, but inefficiencies creep in—hardware downtime, underutilized servers, or rapid obsolescence. AI workloads are also unpredictable, making it hard to optimize resource allocation.
In the early days of all technologies, yes ALL, inefficiency is a hallmark of that stage of progress. Computers used to be the size of rooms and buildings. Printers used to fill small or medium-sized rooms. Ovens and microwaves used to be industrial bricks (half or more the size of a dishwasher today) that could probably hold nuclear material on the inside without any harm to the user on the outside. At this stage of AI development, three years into the buildout and expansion, we are still in the era of dishwasher-sized microwaves.
Upkeep costs (repairs, upgrades, cooling) add to the burden, and older equipment may not support newer AI models, forcing premature replacements. Again, inefficiencies abound.
Leasing AI capacity hinges on 90–100% utilization for profitability, but real-world factors like hardware downtime (GPUs fail at higher rates due to memory/driver issues), underused servers during non-peak inference (20–30% load), and 2–5 year obsolescence erode economics—potentially stranding assets if workloads shift. North American vacancy rates are at historic lows (3.16%, with just 285MW available), driving record leasing (e.g., 500–800MW absorbed quarterly), but AI-specific demand creates a "tsunami" mismatch—new capacity is pre-leased, yet profitability requires sustained high utilization amid unpredictable bursts (training at 100% for weeks, inference at lower rates). For Oracle, this means betting on RPO conversion: its $455 billion backlog (Q1 FY2026) assumes efficient allocation, but upkeep (repairs, cooling, upgrades) adds 10–20% to opex, and legacy gear like A100s can't run cutting-edge models, mandating premature swaps. Broader models show marginal profitability at 80%+ utilization and premium pricing ($25B+ annual CapEx needs 10x revenue growth to break even), but delays in AI ROI (e.g., OpenAI's projected cash-flow positivity only by 2030) heighten risks. Source
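To see why that ~80% utilization threshold bites, here is a toy break-even model. Every number in it is an assumed round figure chosen for illustration (lease rate, all-in cost per GPU slot including facility share, refresh cycle, opex fraction), not Oracle's actual cost structure:

```python
# Toy break-even model for leased GPU capacity, illustrating the
# utilization threshold discussed above. All parameters are assumptions.

def annual_margin(utilization: float,
                  gpus: int = 10_000,
                  price_per_gpu_hour: float = 2.50,  # assumed lease rate (USD)
                  capex_per_gpu: float = 60_000.0,   # assumed all-in cost per GPU slot
                  depreciation_years: float = 4.0,   # ~GPU refresh cycle
                  opex_fraction: float = 0.15) -> float:
    """Annual profit (USD) for the fleet at a given utilization (0.0-1.0)."""
    hours = 8_760  # hours in a year
    revenue = utilization * gpus * hours * price_per_gpu_hour
    fixed = gpus * capex_per_gpu / depreciation_years  # straight-line depreciation
    variable = opex_fraction * revenue                 # power, cooling, repairs
    return revenue - fixed - variable

for u in (0.5, 0.8, 0.95):
    print(f"utilization {u:.0%}: margin ${annual_margin(u) / 1e6:,.1f}M")
```

Under these assumed parameters the fleet loses money at 50% utilization and only crosses into profit a little above 80%, which is the shape of the problem: fixed depreciation dominates, so idle GPUs burn cash just as fast as busy ones earn it.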

The “We Are the Product” Model: IS IT FAILING?
In referencing the previous post: the current dominant internet business model—monetizing user data, behaviors, and sentiments—has fueled tech giants like Google and Meta for two decades. This model relies on:
Harvesting user data (search history, social interactions, etc.) for targeted advertising.
Offering “free” services in exchange for privacy, which users increasingly find invasive and exploitative.
AI amplifies this tension. Large language models and other AI systems thrive on vast datasets, often scraped from public or user-generated content. This raises ethical and legal questions, as seen in lawsuits against companies like OpenAI for using copyrighted material or personal data without consent.
Shifting User Sentiment
The tradeoff between convenience and privacy is wearing thin. Users are growing wary of:
Data exploitation: Constant tracking, profiling, and data breaches erode trust.
Lack of control: Users have little say over how their data is used or sold.
AI’s opacity: Black-box algorithms make it unclear how decisions (e.g., recommendations, content moderation) are made.
Surveys (e.g., Pew Research, 2023) show rising consumer demand for privacy-focused services. Regulations like GDPR and CCPA reflect this, though enforcement lags. People are tired of being the product, and this is pushing demand for alternatives. Customers and the public are already wary of current data and privacy practices, and AI does not seem to be making this better. In fact, it is accelerating the trend.
The question is: will the current Free > Scrape your data > throw advertisers at you model continue to function?
Emerging Economic Models
The “people as the product” model is faltering, and AI’s rise is forcing a rethink. Potential new models include:
Subscription-Based Services:
Platforms like X Premium or SuperGrok (xAI's subscription plan) offer ad-free experiences or enhanced AI features for a fee. This aligns with users wanting value without data exploitation.
Challenge: Subscription fatigue. Users are overwhelmed by paying for multiple services (Netflix, Spotify, etc.), and AI platforms must justify their cost.
Freemium with Paid Trials:
Offering limited free access with mandatory payment details for trials is common but risks alienating users who distrust auto-billing or feel pressured.
Example: Grok 3’s free tier on x.com or grok.com has usage quotas, with higher tiers requiring payment, but this can feel like a bait-and-switch if not transparent.
Decentralized and Privacy-First Models:
Emerging solutions like decentralized AI (e.g., blockchain-based data marketplaces) let users control and monetize their own data. Projects like Ocean Protocol aim to give users sovereignty over their data.
Challenge: Scalability and user adoption. These models are nascent and complex for mainstream users.
Usage-Based or Pay-Per-Use:
Instead of subscriptions, some AI services might charge based on compute usage or API calls (e.g., xAI’s API at x.ai/api). This could appeal to businesses but may not suit individual users.
Challenge: Pricing complexity can deter adoption.
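The pay-per-use idea above is simple to sketch in code. The tier structure and prices below are hypothetical, invented for illustration; real providers publish their own rate cards:

```python
# Minimal sketch of usage-based (pay-per-use) billing for an AI API.
# Rate-card numbers are hypothetical, chosen only for illustration.

from dataclasses import dataclass

@dataclass
class RateCard:
    per_million_input_tokens: float   # USD per 1M tokens sent in
    per_million_output_tokens: float  # USD per 1M tokens generated

def monthly_bill(card: RateCard, input_tokens: int, output_tokens: int) -> float:
    """Charge strictly for what was consumed: no subscription floor."""
    return (input_tokens / 1e6 * card.per_million_input_tokens
            + output_tokens / 1e6 * card.per_million_output_tokens)

# A hypothetical rate card and one month of usage:
card = RateCard(per_million_input_tokens=3.00, per_million_output_tokens=15.00)
bill = monthly_bill(card, input_tokens=2_000_000, output_tokens=500_000)
print(f"Monthly bill: ${bill:.2f}")
```

Notice both the appeal and the problem in one place: the bill scales exactly with use (great for businesses), but a user has to understand two meters and two prices just to predict next month's cost, which is the pricing-complexity challenge above.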
Where We’re Headed
We are perhaps at a fork in the path toward a new economic model. The "people as the product" approach is losing viability as users demand transparency, control, and value on the one hand. On the other hand, the indomitable hunger of market excess and shareholder returns pushes in the exact opposite direction. Quite literally, an unstoppable force meets an immovable object. AI's data hunger makes this shift urgent. What's to come?
Bold statement, you say? World parliaments and Congress are not just sitting idly by while data is ripped away from the public (or are they?). Here is the rub: though they may believe they are regulating the industries with new laws, regulation, and compliance, let's be real for one second. Many of these legislative members are far past the half-century mark and have no grasp of the technicals, terminology, baseline infrastructure, data architecture, etc.; it is quite literally FAR OUTSIDE their reach. And even assuming they do a great job of understanding the entire techno-landscape, the regulations that come later will serve merely to burden the existing model, hardly touching the new AI infrastructure or its new modes of operation.
WHY? AI can take regulations 500, 5,000, or even millions of pages long and:
deconstruct them;
generate graphics;
create ops/flow charts;
find the gaps;
and begin exploiting those gaps
all within the time it takes an executive team merely to plan a meeting to discuss them. Yes, THAT FAST.
Here are some possibilities:
Hybrid Models: Combining subscriptions with optional data-sharing for discounts. For example, users might opt in to limited data collection for lower fees.
Regulatory Push: Stricter laws could force companies to prioritize user consent and data minimization, reshaping AI training practices.
Ethical AI Brands: Companies that prioritize privacy and transparency (e.g., open-source AI or audited data practices) could gain market share.
User-Owned Ecosystems: Cooperatives or platforms where users co-own data and profits could emerge, though this is speculative and far off.
Oracle’s Role and Challenges
Oracle’s AI and data center push is ambitious but faces hurdles:
Competition: They’re up against AWS, Google Cloud, and Microsoft Azure, which have deeper pockets and established ecosystems.
User Trust: Oracle must convince users its AI services (e.g., cloud-based generative AI) respect privacy, especially given the data-intensive nature of AI.
Financial Strain: Heavy borrowing could backfire if AI adoption slows or if economic conditions worsen (e.g., higher interest rates), and the big fear is that Oracle is still growing (slowly) so long as the AI datacenter buildouts are still ongoing. What happens when construction completes, and the doors are open but . . . crickets?
Conclusion
Now it would appear that I am "picking" on Oracle. While there may be slight truths to that sentiment, we here at EthicalAI are interested in, and try to abide by, the data alone. So no: throughout the endeavor to demystify, synthesize the components of, and parse the multi-threaded, multi-party assault on "AI," we try to get to the core of the issues and highlight the facets or pathways that are less obvious.
And, to effect change or move the industry in any meaningful direction, SCALE is the key. Oracle just so happens to be organizing its capital, energies, and activities around this time with its budding business model, herculean capital expenditures, and attendant press releases. As such a "new" player in the AI space, Oracle serves as a good backdrop against which we may compare and contrast the megalithic tech companies already embedded (Meta, Google, Microsoft) in the Zeitgeist and marketplace.
The AI boom is exposing cracks in the “we are the product” model. Users are fed up with privacy tradeoffs, and companies like Oracle, betting big on AI infrastructure with seeming reckless abandon of all else, must navigate capital risks and user backlash. Subscription and pay-per-use models are gaining traction, but they’re not a panacea—users want control and fairness. We’re likely headed toward a hybrid model where privacy, transparency, and user choice take center stage, driven by both market demand and regulation. The next 3-5 years will be pivotal in defining this new economic paradigm.
For readers who are displeased with this conclusion as a "cop out," I understand completely. Please consider the following: humans evolved from hunting & gathering as a sustenance model toward small-community/farming economic models. Then, with city-state and nation-state developments ("society" at large), toward nation-to-nation treaties, trade ties, and alliances [the Bargain, Trade, and Barter model]. The Artisan/Patron model and the Enlightenment fueled innovation, the arts, and a global golden age mated to colonialism and state-monopolized slash & burn models of dominance, leading ultimately to the industrial revolution [Hub & Spoke / globalization supply-chain model(s)]. The world's economies have ebbed and flowed from interconnection and interdependence to discord, chaos, and co-dependence, with some hard feelings in tow.
The point, however, remains the same: we as a society, and as a collection of societies, had models for these interchanges: exchange rates, markets, trade routes, pliable relationships, people, places, and histories to share among one another. These models involved all parties (with ripple effects to non-parties) as well as government, people, industries, resources, and all the attendant messiness.
In 2026's world and beyond, this may yet be the first time where
a technology (AI) can develop, grow, and sublimate with or without the rest of society;
AND can do so perhaps entirely independent of everyone else.
A wild person who throws off the chains of society and the oppression of the ancestors can abscond into the woods or mountains and live or die at their own whims and desires. Broadly, this one person does not affect the function, success or failure, or futures and outcomes of society. YET AI, developed in the light of day or in secret, may stay dormant, but it may also emerge years later with the critical capabilities to interact with, disrupt, change, enhance, or hurt the rest of society. So when I say that we don't know: WE TRULY DO NOT KNOW.
While economists and theorists clamor to hitch economic models, funding schemes, capital-expenditure projections, and zero-sum power charts to plot out that trajectory, the fact remains: CURRENTLY, WE HAVE NO ECONOMIC MODEL FOR HOW OR WHAT AI IS SUPPOSED TO DO.
If a cop out, 'tis merely a partial cop out. But point taken.
More importantly though, before you hitch a ride on the magic hopium or cop a feel on the hypium rally:
Please. Have a think about where you fit in to YOUR future. What do you want YOUR future to look like. AI or NOT, the organic, eating, sleeping, and working part of the equation remains.
In case you are interested specifically in economic models, here are a few that could scale out and potentially replace our current product/globalization model:
Decentralized Digital Barter Model: blockchain enabled digital regions / micro communities with focus on peer-to-peer transaction level volume and exchange. Web3, distrust of centralization, and the technical interconnectedness forging forward as drivers.
Circular Economic Model: Micro-level focus and decoupling reign for this model, where one can truly only count on local microeconomies to survive: climate change, inability to travel, or technological/political structures breaking down. This model is NOT ideal, but has its charms. Not a feasible big-tech model, because success shows only at the local level, rather than organizationally or globally.
AI-enhanced/orchestrated Supply Chain Model: We all either through complacency, force, coercion, or necessity relegate the inner workings of supply chain, logistics, products, materials pairing, manufacturing, and other industrial processes solely to AI: Leave it to Beaver model if you will. Top tier efficiency, but this model leaves the human components as an afterthought.
Virtual / Metaverse Economy: Maybe the most dystopian of all? Essentially the digital economy: digital real estate, virtual goods, virtual services, etc. all exist independently of the physical world. Valuations, prices, and the goods, services, and currency exchanged are platformed while the physical world is left to its own devices. Not inherently dystopian or undesirable on its own. However, if we shift into this virtual economy in a large way, perhaps it signals societal decoupling and civility collapsing, or having collapsed. Hence, it may be safer and more enjoyable to digitally experience life.
Again, YOU are not spelled out or identified by any component or capacity of AI. Please consider focusing on the YOU part of the equation.
