29 sep 2025
Whitney Webb is an investigative journalist known for her in-depth reporting on government corruption, corporate power, and intelligence networks. Through her writing and public talks, she exposes hidden systems of control at the intersection of politics, finance, and technology.
In this context, I think the digital ID is a key enabler of the surveillance: knowing what everyone is doing at the transactional level and being able to tweak that micromanagement based on a person's activity, because the digital ID isn't just limited to the financial system, right?
It's your travel history, your health history, your career history, your education credentials, your access to telecommunications, social media, the internet.
With the new AI era, they can fuse all that data, analyze it, and, depending on how they develop that AI algorithm, use it to control people in really unprecedented ways.

I think the digital ID and the CBDC, and its private-sector equivalents, were always sort of intended to be the same system. There are documents from the UN, from the BIS, and from related groups
that have been working on this for years that essentially frame one as essential to the other.
Using words like, you know, "this is inclusionary." The
whole marketing behind digital ID, I guess, is that everyone needs legal ID, because otherwise they're unable to access essential services, right?

And so the idea is we all have to be included in the system. And they directly link that to the concept of financial inclusion and banking the unbanked, which was brought up earlier.
But inherently these systems actually function in an exclusionary way, based on how they've been set up.
They have essentially said that this will be the only way to prove you have legal identity.

And so if you don't participate in that system, as far as the state or the private sector is concerned, you don't exist.
So by not participating in that system, you're inherently excluded from the economic system and, really, essentially everything. You have to onboard to the surveillance state or be excluded from everything.
So it's being marketed as inclusion, but it's really inherently exclusionary.
How does this system get triggered?
How do we move into this Mark Carney-ism?

Well, I think they sort of give it away when they say that this is the new Bretton Woods moment that needs to be seized.
So Bretton Woods was what came out of World War II, essentially: the creation of a new financial governance system after the war.

And this is essentially an effort to create a new financial governance system that was announced well before any sort of crisis like that.
But it's probably going to need a crisis of that level to be implemented and to convince people to onboard at scale.

And if you subscribe to the theory that all wars are banker wars, for which there is
plenty of evidence, I would say that seems to suggest this is going to be a problem-reaction-solution type of situation where they've already made the solution.
They've already developed what they want to be the new financial governance system after this new Bretton Woods moment. They just need some sort of big event on the scale of World War II, or some event that's equally disruptive, in order to say, "All right, now it's time for a new financial governance system," like they did after World War II.

Well, it seems like in Larry Fink's case in particular, there's a big ambition to develop new asset classes that can be used to fuel their existing business model and perpetuate it for, I don't know, millennia forward.
So, one of these that I've been writing about for a few years is the whole idea of natural assets, what they call nature's economy.

And one of the groups that has been sort of propelling this forward, at least one of the earlier groups, is the Intrinsic Exchange Group, which is a product of the Rockefeller Foundation and the multilateral development banking system. They have a graphic on their website called "The Opportunity" that shows the existing amount of assets in the world economy, and then shows what happens if we unlock natural assets: nature's economy is something like six times the amount of existing assets in the economy today.
And so, as an asset manager, BlackRock being able to unlock and take control of as many natural assets as possible that aren't currently part of the financial system is obviously a way for them to perpetuate what they do and to deepen and expand their control, not just over people and the existing financial system, but really over the natural world as well.
Essentially, turn everything alive into a tradable Wall Street financial product.
And the goal, as Fink has stated, is to have all of this on a universal
ledger, on blockchain presumably, and have it be trackable and surveillable, which is interesting if you look at it through the lens of risk management, something Larry Fink is very open about having been one of the guiding lights of his whole career. By having it all surveillable and, in a sense, automated, he's able to have his risk-management AI, Aladdin, exercise control over it in unprecedented ways.
For their benefit, I think.
Then, of course, a lot of what's happening now as we move into this new financial governance system is the push to change all of the infrastructure toward this quote-unquote "green" model, or decarbonization, which of course interfaces with the global carbon market, though it doesn't have to. But the push is obviously to create a bunch of new infrastructure all over the world, and BlackRock is positioning themselves to be one of the key players in that space.
They acquired GIP, I think it's called, Global Infrastructure Partners, one of the biggest infrastructure developers in the world, and I think one of their top guys at Davos just a few months ago was talking about how they're betting really big on infrastructure going forward, and how it's going to be one of the biggest investment opportunities of the next several decades.

So I think they’re um quite ambitious and you know if people aren’t aware of this I’m sure they’ll get away with it but we’ll see.
One of the things, too, about the natural asset thing, at least as far as the natural asset corporation model is concerned, is that you literally just go into a forest, or to a river or a lake, and you identify the natural asset, at no cost to you.
Then, just after identifying it, you issue shares in the natural asset: a lake, a forest, whatever. And then you sell those shares to sovereign wealth funds, asset managers, whatever, and then you have an IPO and generate all this money.

I mean, it's literally just pointing at something outside and saying, "This is mine. I'm going to fractionalize it and sell it to people." And you're producing money,
out of thin air, essentially. And you're able to do that. I mean, the natural world is vast and huge, and they're doing this. They're financializing it all while framing it as the only way to save the planet, but really it's the only way for them to save their insane debt racket.
OpenAI and Oracle,
for example, in this recent announcement: those are very big companies that are likely to only get bigger. Should there be a more decentralized landscape in these industries that are so important, to foster competition, or are we getting close to a quasi-monopoly model in certain fields of Big Tech?
And I think that could be potentially problematic and something to look at, especially when you consider that there have been statements made by members of the quote-unquote "PayPal Mafia" that free-market competition is for losers, and that companies should find a niche, corner that market, and essentially build a monopoly or quasi-monopoly.
Some of those people have done that in direct collaboration with the national security state, as is the case with Palantir, for example, which was created with the direct involvement of the CIA and In-Q-Tel and has very significant CIA connections from its origins to now.
So if you're building a quasi-monopoly, you don't like free-market competition, and you're going to partner with the most powerful and unaccountable parts of the state in that way, what does that ultimately mean? Is this going to be the free-market utopia we're being sold, or is something else going to happen?
What was your initial reaction to the announcement of the $500 billion that are being invested into American AI infrastructure?
Yeah, so unfortunately I wasn't able to pay super close attention to the press
conference, which involved, of course, Sam Altman, Larry Ellison, and Masayoshi Son
of SoftBank.
But essentially, it's not really surprising to me that there's been a big push into AI infrastructure.
And this is something that's really been, I would argue, a bipartisan push that's been going on for some time, starting really with the National Security Commission on Artificial Intelligence, which was led by Eric Schmidt, the former Google CEO, who, even though he's a Democratic donor, has been very active in this space and sort of has the same view propagated by more Republican-leaning Big Tech figures: that the US needs to out-compete China in AI specifically. And some of the National Security Commission on AI's comments and presentations, which have since been released through Freedom of Information Act requests,
essentially suggest that the roadmap to doing so is to impose lots of domestic, AI-powered surveillance technology on the American public as a way to sort of leapfrog China.
Because they argue that China has already done that, and thus has a much larger user base and data set for all of these different AI technologies.
And so, in order to outdo them, we must drastically increase our user base for these AI technologies, which includes things, they say, like surveillance, as I mentioned earlier, and facial recognition; expanding e-commerce; and reducing in-person shopping and other in-person activities, including healthcare, by moving in-person doctor visits to telemedicine as much as possible. This was really a big focus of that commission,

which was the meeting of the national security state and the Big Tech industry in the years prior to COVID-19. And a lot of that push into the digital realm was a consequence of COVID-19-era policies that helped feed that ambition.
I think with this goal of increased investment in AI infrastructure, we're likely to see that develop, and it's not really surprising that you have Larry Ellison of Oracle involved. He's recently made comments about the need for things like digital ID in the United States,
which would be a core foundational aspect of this type of wide-reaching AI system. He also said (I'm paraphrasing here, of course) that invasive AI surveillance of citizens will help them be on their best behavior, which is a rather Orwellian thing to say, and it very much feeds into the idea of the panopticon. The panopticon is used in the context of surveillance a lot, but it was originally a model for a prison, where the prisoners are always under the impression they're being watched but can't directly see the people watching them.

The idea is that this induces obedience: being on your best behavior at all times.
And I personally view that as a very big, and not necessarily positive, trade-off between liberty and at least perceived security.
I think people would do well to interrogate some of the consequences of handing such a big AI infrastructure project to companies run by
individuals with those views in particular.
There are two things I want to drill into.
I’ve heard you say panopticon before, but I actually don’t know what that means. What does that mean?
So, as far as I'm aware, it was originally a design for a prison, as I mentioned a second ago.
I believe it originated in Britain. Basically, the idea was to have a somewhat circular prison with the cells on the perimeter and a long, sort of obelisk-esque tower in the middle where the guards are, with the windows tinted or constructed in such a way that the prisoners can't see the guards watching them.

The impression is created that they are always under watch, so as to induce obedience: they could be being looked at at any time, but they don't really know. So that's the origin of the term. Okay, so it's a stand-in for eternal surveillance.

Yeah. But as I said, originally it was a prison concept; since then it's been adapted to refer to the same idea applied to mass surveillance in the digital era. Obviously, when this was developed (I forget if it was the 18th or 19th century), digital technology as it exists today wasn't around, but I think the idea persists and can arguably be scaled to levels that the original designers of the panopticon concept probably couldn't even fathom at the time.
Okay. So, just breaking it down: "pan," all; "optic," presumably sight.
So, omniscient sight, basically, or omnipotent sight.
Okay, cool. Totally understand. Now, I have a feeling that underlying the conversation we're about to have is this idea of liberty versus security and the trade-off there.
If you were going to plant a flag in your own life and give us a breakdown of how you see that trade-off, is it 100% liberty, and you don't care what that means for security, or is it some kind of balance?

Well, it's hard. I would rather start with a paraphrase of a rather well-known quote of Benjamin Franklin's, which goes something along the lines of: those who trade their liberty for security
ultimately have neither. And I think my views on that in an ideal world are different from my views on it as it relates to the world we live in today.
I think there's a pretty clear indication, particularly over the past 20-plus years, the post-9/11 era, that great surveillance powers, at least in the hands of the United States and, frankly, most other governments in the world, are routinely abused for the purpose of silencing dissent, among other things that are not positive for liberty.

You could argue that there are situations where insecurity is engineered in the public in order to facilitate people giving up their liberty, or civil liberties in particular, in exchange for what they perceive, or are told, is increased security.
I don't really think that trade is always real at the end of the day. And I think a lot of the mass surveillance system, as it was designed in the immediate post-9/11 era, was meant to rely heavily on predictive paradigms.
So what do I mean by that?
Essentially, things like what are now called predictive policing or predictive health. The idea of "predictive policing," I think, is a rather polite way to say pre-crime:
the idea that we can stop criminal activity or a terrorist attack before it happens by looking at certain data flows, for example. The same was also posited in the post-9/11 era for stopping bioterror attacks, and natural pandemics, before they happen, which is also kind of a sticky issue, because some of those proposals, and their resurrections in the post-COVID era,

I think, are a little dubious in a sense, because they sort of posit creating health interventions before symptoms even show up in a particular population, based on certain markers identified by AI in sewage water, for example, and things like that.
So you're really putting a lot of trust in algorithms, and sometimes those algorithms are not as accurate as the company claims; as far as I know, a lot of those claims are not independently vetted.
So ultimately it could lead to some unfortunate consequences,
particularly, I think, if you're trying to prove in a pre-crime scenario that you weren't going to commit the crime.
It's basically you versus the algorithm, and who do you think the position of authority is likely to side with in those scenarios?
You know, I think it kind of creates sticky situations. All right.
That feels like a comment around the reality of this stuff.
You said that there’s a discrepancy between how you’d feel about it in an ideal world versus the world we actually live in.
So the concerns around this would be: one, can it be trusted? That's certainly a big question.
Two, are there liberty constraints that would become so problematic that the cure is worse than the disease?
So those are the things it sounds like you run into when you look at the
reality of this stuff being deployed.
Talk to me though about the ideal world.
Is this stuff that you wish that we could deploy, but you just know in the hands of humans, we can’t trust it?
Or, even in an ideal world, would your take be very different?
Well, I think in an ideal world, this type of stuff to tackle crime and illness wouldn’t really be necessary, quite frankly.
But I guess it depends on what your definition of the ideal world is.
I guess the question in that scenario is: is the government trustworthy?

I guess in that scenario, if it were provably trustworthy, and we didn't have all these national security scandals of grave severity,
in many cases going back decades and decades, with no accountability for national security overreach over that time period,
it would be easier, perhaps, to trust. But, I don't know, I'm personally kind of a voluntaryist: I don't like to make decisions for other people, and I tend to be a liberty maximalist.
I think that begets better outcomes in general.
I don't know; again, it depends on what you define as the ideal world. And since a lot of my focus as a writer and researcher has been on the national security state, I tend to think it's not appropriate to give them more power than they already have, certainly.

I think there needs to be accountability for the overreaches that we know have happened from investigations, and for the ones we probably don't know about. I really don't foresee there being any transparency or change there, even with the incoming administration.
Okay. So, the following assessment is obviously oversimplified, but just so I know if I'm on the right track with the way that you view the world:
is your take on the way the government works that they will spy on their citizens to gain control over their citizens, period?
I think that is an accurate representation of your base assumption. Is that true?

I think it's close. I would also add that ultimately I sort of see the government as a public-private partnership.
The public sector, because of political donations among other things, is always very attuned to the concerns of the private sector, and a lot of the most powerful
players in the private sector and in the financial sector are very interested in de-risking the world as much as possible, a view a lot of intelligence agencies share.

So I think a lot of this surveillance, these predictive paradigms, and all of that, from their perspective, comes down,
I think, to de-risking society, and markets as well.
Okay, de-risking things doesn't sound immediately negative.
Why is de-risking
bad, or why does it yield bad outcomes?
So, you know, if you want to take the risk that there will be crime in a particular area down to zero, the type of interventions you would have to impose would be pretty significant, right?
It doesn't sound bad when you phrase it that way, necessarily. And that's why I phrased it that way, because I think that's how these people in elite circles tend to think about it,
ultimately, at the end of the day.

They don't really... if you're the CIA, for example, you don't really want to have to be more transparent and more accountable for things you have done in the past.
I think that's been pretty clear throughout their history as an agency.
They'd be more interested in eliminating the risk, for them, that they would have to make those behavioral changes, I guess you could say, as an agency.
So, what would you do to prevent that kind of risk from materializing?
Would you manipulate the public sphere and the public discussion around the CIA to make those things unpopular, for example? De-risking can take many forms, but I think these people generally tend to think of things in terms of risk to the activities they're doing, especially those that tend to be gray-area or potentially illegal.
I think concentrating power in that few hands, especially hands where capital is also so concentrated, is increasingly problematic.

There's obviously the adage that absolute power corrupts absolutely, and I think history does tend to show that. And I think people, particularly in the United States, don't want a small handful of the richest people in the world controlling and micromanaging their lives
at very minute scales, which is what some of these people propose to do. As life becomes more digitalized and AI takes a more prominent role in people's lives and in society, the possibility exists to do that kind of micromanaging of people's lives. That falls under the umbrella of what has been referred to as technocracy: the idea of elite experts governing and micromanaging society, using that as a replacement for representative democracy, for example. And I think there are a good number of people in Big Tech who believe they are better suited than elected leaders, or the public, to lead and micromanage people's lives in particular directions.

I certainly don't agree with that. As I said earlier, I personally would like people to be empowered to make their own decisions, and a lot of that comes through education and having the freedom to make those decisions.
And sometimes making your own decisions can lead to failure, maybe not the best outcomes, but that's how people learn and adapt and evolve.
And I think it's necessary for people to be able to do those things.
I mean, as a parent, it's important to ensure that my kid has the freedom to not necessarily succeed all the time, because you learn important lessons and build character from not getting things right on the first try.
And to have things micromanaged to prevent any sort of potential bad outcome, I think, could be problematic. So is having this small group decide what is a bad outcome versus a good outcome without input from the public,
and having that enacted and imposed on the public without their consent; obviously, I think that's a hugely significant problem.
But as a parent, I imagine that you do keep your child safe from the worst possible things, the ones they may not be able to anticipate.
Do you think that the government and/or the elites (I don't know if you use those interchangeably or not) should be doing the same kind of thing: curbing liberty to some extent to make sure that we don't drive ourselves off a cliff?

Well, I think the necessary guardrails on society, like "don't murder someone," for example, are already there.
Do we really need new ones enforced by AI? Do we need to prevent crime before it happens, and all of this stuff, at the expense of basically reneging on the entire constitutional right to privacy? I don't really agree with that. If people want that kind of system, sure, opt into it: allow the government into every aspect of your online
and private life. But I don't think people should be forced into it if they don't consent to that kind of system.
Personally, I think consent, frankly, matters, and I think a lot of Americans agree with that viewpoint, considering the backlash we saw to COVID-19 policies, for example, where consent was coercively removed in the case of mandates.
Well, I think people need to be empowered to make their own decisions. So if you have a situation where people are trying to manufacture consent through fear, then reporting on the coercion involved, and elucidating the facts of the situation, would empower people to make their own decisions, if they can be shown that there is a fear element here, and a coercive and manipulative element to it.
I'm sure the decisions they would take would be a little different.
The utility of manufacturing consent via fear is that when people are afraid, they tend not to think critically; they act impulsively and emotionally, and they look for easy solutions.
So if you've created fear in the public and then you offer a tailor-made solution, you can more easily get consent that way.
But if people are empowered to know that the situation has been manipulated to force that outcome, I'm sure a lot of them would make different decisions, and that consent would not necessarily be there if they were aware of the manufactured nature of the situation.
I haven’t paid ultra close attention to the executive orders.
Some of my reporting in the lead-up to his inauguration, and also during the campaign as it related to the Trump camp in particular, covered some of his connections to these Big Tech figures, whether it's Elon Musk or figures that are also part of the so-called PayPal Mafia (David Sacks being the crypto czar, for example), and someone like J.D. Vance having relatively close proximity to Peter Thiel.
Someone in a top HHS position, Jim O'Neill, is also very close to Peter Thiel and, I would argue, has wildly different views than Robert F. Kennedy, who's supposed to lead HHS, especially in terms of deregulation, biotech, and mRNA technology.

You know, what are these things ultimately going to result in?
And I would encourage people to consider those connections, and also to consider that
there's a high likelihood, just like in the first term, that there will be a lot
of deregulation over the next four years.
And I'm sure a lot of people will be happy about that. But I think some people that voted for Trump may not be, particularly those in the Make America Healthy Again movement, where deregulation at the FDA, for example, could result in a flooding of the market with mRNA products, which are very controversial with a significant segment of Trump's base.
Or when Trump was in office the first time, he deregulated biotech and agriculture pretty significantly.
A lot of people in that same Make America Healthy Again movement are not really into a proliferation of GMO crops in the food supply.