Thursday, August 15, 2024

The Harris Honeymoon Will End, It’s What Comes After That Will Matter


Kamala Harris is enjoying an unprecedented political honeymoon since she helped stab Joe Biden in the back. If you can’t win a party’s nomination, steal one – kind of like how she got appointed by her married boyfriend to her first job in politics…without having to have sex with someone 31 years older than her. But the honeymoon will end. How the Trump campaign deals with it when it does will be the difference-maker in the 2024 election.

Polling has been so good for Kamala Harris you really have to wonder if she was part of the Biden administration or not. How else could someone who was part of a historically unpopular presidential administration now be “winning” in most of the polls?

First off, beyond telling you what the sentiments of voters are long before anyone votes, polls are meaningless. You want to be ahead in them, obviously, but they are the wind this early – shifting in direction and intensity all the time. People aren’t really thinking about voting yet.

Second, they aren’t wrong. You can’t dismiss polls as being wrong; they are a pretty accurate capture of the moment in time they were conducted. That doesn’t make them more meaningful – election day is months away, and any poll you read today was conducted last week. Those two time periods only have in common that they are points in time, nothing more.

It’s not about what was, or even what is; elections are about what is next.

That being said, past is prologue. 

The Trump campaign needs to be on TV now, running ads on networks and shows that voters they hope to win watch, not Fox News.

Nothing against Fox, but if you’re a Republican presidential nominee trying to win over Fox viewers you’ve got bigger problems than you’ll ever be able to overcome.

This election is about appealing to 5-7 percent of the electorate who are open to voting for either candidate in the 5-7 swing states that will choose the next president. Nothing else matters. 

Never forget, Hillary Clinton spent a lot of money in Illinois in the final days of the 2016 election, even though she was going to easily win the state, because she wanted to run up the popular vote total in a friendly state so she could claim a large mandate when she assumed office. Had she spent that cash and effort in the swing states she barely lost, she might not have lost. 

What you do matters, and so does where you do it. 

There is a chance right now for the Trump campaign to make inroads in Pennsylvania they desperately need to make. And what they can do in PA can be replicated in other swing states. 

This woman perfectly expressed what is THE issue this election: inflation. “They’re killing us without killing us,” she told NBC News while tearing up about her difficulty in supporting her family.

She is not alone, not by a longshot. But she, like millions more Americans, does not associate Kamala Harris, or even Joe Biden, with those struggles. When asked who she blamed, her answer was as telling as it was terrifying: The federal government.

While that seems like a good answer, she doesn’t appear to associate Harris, Biden or even Democrats with the federal government. It’s a disconnect that will be an issue, if not changed. It’s like the senior citizens who carry signs telling the government to get its hands off their Medicare. Well, Medicare IS the government. And government – as run by Biden, Harris and Democrats – IS the problem with the economy. 

Republicans and the Trump campaign need to take a break from the social media videos, no matter how good they are, and reach swing voters where they actually live – television.

They need to be on the air in Philadelphia, Pittsburgh, Detroit, Madison, Atlanta, etc., on networks Republicans tend to avoid – Bravo, HGTV, Paramount and so on, along with the broadcast networks to reach that Philly woman, her friends and as many people like her as possible. And the message has to be along the lines of, “We know how tough Joe and Kamala made it, and we will fix it and make it like it was before.” 

Trump’s campaign needs to marry Kamala to prices and inflation, using that video to do it. That woman’s fear is universal, as is her struggle. And they need to hammer that message every single day until the election. They can hit on other issues too, and still preach to the choir, but they need to reach black voters in urban areas like that woman with the message that Democrats are the cause of their problems and he has a plan to make it better. It must be subtle – continually calling Harris stupid isn’t going to work and will turn people off (Trump already has the votes of everyone on whom that would work).

If they can bridge that gap between people’s pain over inflation and the fact that Kamala Harris and Democrats caused it, the polls will change by a lot and permanently. If they don’t even try, or drown out their own message with irrelevant tangents or unimportant personal attacks, we can look forward to 4 more years of hell. Whether or not we can pull up from a nosedive that continues for another 4 years is anyone’s guess, but sooner or later, if Democrats maintain control, we will hit the mountain.



Be Prepared for Trump to Be Sentenced to Prison in September

Matt Vespa reporting for Townhall 

National Review’s Andrew McCarthy, a former assistant US Attorney, is giving us plenty of warning right now: be prepared for Judge Juan Merchan to sentence Trump to jail next month. Sentencing in the politically motivated hush money sham is set to be handed down on September 18, two days after early voting begins in the crucial battleground state of Pennsylvania. McCarthy said the intent is clear: smear Trump as a felon who just got sentenced to the slammer weeks before Election Day (via Fox News):

The Trump defense team has been trying to stave off sentencing. And the lawyers have what, in a normal case, would be real ammunition.

On July 1, the U.S. Supreme Court held that presidents (including former presidents) are (a) presumptively immune from criminal prosecution for any official acts taken as president, and (b) absolutely immune if the official acts are core constitutional duties of the chief executive. The court instructed that this immunity extends not only to charges but to evidence. That means prosecutors are not just barred from alleging official presidential acts as crimes; they are further prohibited from even using such acts as proof offered to establish other crimes. 

There is no denying that Bragg’s prosecutors used some of Trump’s official acts to prove their case. Indeed, they called as witnesses two of Trump’s White House staffers. 

[…] 

If we may read the tea leaves, Merchan has already decided that he will deny Trump’s immunity motion. There is, moreover, a high likelihood that he will impose a prison sentence against Trump right after that. 

By the time he’d issued his letter last week, Merchan had had weeks to mull over the Supreme Court’s immunity decision and Team Trump’s subsequent brief arguing that the guilty verdicts should be tossed out. He told the parties to get ready for sentencing anyway. Obviously, if Merchan had any intention of vacating the verdicts, or of recusing himself, he would not have stuck to the sentencing date. 

I suspect that Merchan will rationalize that Trump (a) was not charged based on official presidential acts, and (b) would have been convicted even if Bragg’s prosecutors had not introduced arguably immunized evidence. Such a ruling might be wrong, especially on the latter point (at trial, prosecutors described some of the testimony from Trump staffers as "devastating"); but Merchan made so many outrageous rulings in the case that it would be foolish to expect him to change course now.

The scary part about McCarthy’s piece is that it rehashes what we already know: this is politics, not justice. Therefore, the judges and the courts going rogue to eliminate a political threat to the Democratic Party is not out of the question—it’s already happened. And it’s not just McCarthy making these points. CNN’s Elie Honig has penned damning articles and delivered similar commentaries on-air about the shoddiness of this case. Legally speaking, Honig added that what Trump was convicted of, falsification of records, is no more serious a conviction than one for shoplifting a Snapple from a local bodega. Also, the statute of limitations had expired.

He also said that some of the evidence cited in the hush money trial included Trump’s discussions with aides. Honig also has been critical of the “other crime” angle used by the prosecution to circumvent the statute of limitations on falsifying records charges.



Why Should The Liberals Get Credit For ‘Fixing’ Problems They Created

They seem unwilling to acknowledge that in democratic nations, incompetent governments are punished through defeat at the ballot box, rather than being given endless mulligans.

Liberal MP Randy Boissonnault – not to be confused with “other Randy” – appears to believe the Liberals deserve credit for how they are handling problems with the temporary foreign workers program:

The problem here is that like everything else the Liberals say, it all falls apart when we look at the data.

Notice how Boissonnault is trying to get people to focus on actions taken by the government to respond to the unsustainable surge in temporary foreign workers, without acknowledging when that surge took place.

And, notice how he shares a clip of Pierre Poilievre without any timestamp or context.

There’s a good reason for that:

The Liberals are the ones who implemented the policy of rapidly expanding Canada’s temporary foreign worker intake:

Pierre Poilievre didn’t do this.

The Conservatives didn’t do this.

Justin Trudeau and Jagmeet Singh did this.

The Liberals & NDP did this.

And now – having done serious economic damage to the country and having made the job market even more hostile for many Canadians (including many young Canadians struggling to find work) – the Liberals expect to get credit for taking some small steps to address a problem they created.

I don’t think so.

In democracies, governments don’t get endless chances to stay in power and win praise for fixing problems they caused. Instead, they are defeated and others are given the chance to govern.

Spencer Fernando

CCP Develops AI Weapons, Ignoring Global Risks

The development of AI weapons may be equivalent to the nuclear revolution, according to Bradley Thayer, a senior fellow at the Center for Security Policy.

Cutting-edge weapons powered by artificial intelligence (AI) are emerging as a global security hazard, especially in the hands of the Chinese Communist Party (CCP), according to several experts who spoke to The Epoch Times.

Eager to militarily surpass the United States, the CCP is unlikely to heed safeguards related to lethal AI technologies, which are increasingly dangerous in their own right, the experts said. The nature of the technology is prone to feeding some of the worst tendencies of the human psyche in general.

“The implications are quite dramatic,” said Bradley Thayer, a senior fellow at the Center for Security Policy, an expert on a strategic assessment of China and a contributor to The Epoch Times. “And they may be the equal of the nuclear revolution.”

Killer Robots

The development of AI-powered autonomous weapons is rapidly progressing, according to Alexander De Ridder, an AI developer and co-founder of Ink, an AI marketing firm.

“They’re becoming quickly more efficient and quickly more effective,” he told The Epoch Times, adding that, however, “they’re not at the point where they can replace humans.”

Autonomous drones, tanks, ships, and submarines have become a reality alongside such exotic iterations as quadruped robot dogs armed with machine guns, already seen in China.

Even AI-powered humanoid robots, the stuff of sci-fi horrors, are in production. Granted, they’re still rather clumsy in the real world, but they won’t be for long, De Ridder said.

“The capabilities for such robots are quickly advancing,” he said.

Once these machines reach marketable usefulness and reliability, China is likely to direct its manufacturing might toward mass production, according to De Ridder.

“The market will be flooded with humanoid robots, and then it’s up to the programming how they'll be used,” he said.

That would mean military use, too. “It’s kind of inevitable,” he said.

Such AI-powered robots are very good at using optical sensors to identify objects, including human beings, said James Qiu, founder of GIT Research Institute and former CTO at FileMaker. And that gives them the potential to be very effective killing machines.


A Cambodian officer inspects drones and a machine-gun equipped robot battle “dog” that are displayed for Chinese soldiers during a joint drill at a military police base in Kampong Chhnang Province, Cambodia, on May 16, 2024.

On a broader level, multiple nations are working on AI systems that are capable of informing and coordinating battlefield decisions—essentially acting as electronic generals, according to Jason Ma, a data research lead at a multinational Fortune 500 company. He asked not to disclose the name of his company, to prevent any impression he was speaking on its behalf.

The People’s Liberation Army recently conducted battle exercises in which an AI was directly placed in command, and the U.S. military also has projects in this area, Ma said.

“It’s a very active research and development topic,” he said.

The need is obvious; battlefield decisions are informed by a staggering amount of data, from historical context and past intelligence, to near-real-time satellite data, to millisecond-by-millisecond input from every camera, microphone, and every other sensor on the field. It’s “very hard” for humans to process such disparate and voluminous data streams, he said.

“The more complex the warfare, the more important part it becomes how can you quickly integrate, summarize all this information to make the right decision, within seconds, or within even sub-second,” he said.

Destabilization

AI weapons are already redefining warfare, but the experts who spoke to The Epoch Times agreed that the consequences will be much broader. The technology is making the world increasingly volatile, Thayer said.

At the most rudimentary level, AI-powered weapon targeting will likely make it much easier to shoot down intercontinental ballistic missiles, detect and destroy submarines, and shoot down long-range bombers. That could neutralize the U.S. nuclear triad capabilities, allowing adversaries to “escalate beyond the nuclear level” with impunity, he said.

“AI would affect each of those components, which we developed and understood during the Cold War as being absolutely essential for a stable nuclear deterrent relationship,” he said.

“During the Cold War, there was a broad understanding that conventional war between nuclear powers wasn’t feasible. ... AI is undermining that, because it introduces the possibility of conventional conflict between two nuclear states.


Iran’s Revolutionary Guards fire test missiles during the first phase of military manoeuvres in the central desert outside the city of Qom, on Nov. 2, 2006. (-/Fars News/AFP via Getty Images)

“AI is greatly affecting the battlefield, but it’s not yet determinative.”

If AI capabilities were to reach “the effect of nuclear war without using nuclear weapons,” it would sit the world on a powder keg, he said.

“If that’s possible, and it’s quite likely that it is possible, then that’s an extremely dangerous and destabilizing situation, because it compels somebody who’s on the receiving end of an attack to go first—not to endure the attack, but to aggress.”

In the warfare lexicon, the concept is called “damage limitation,” he said. “You don’t want the guy to go first, because you’re going to get badly hurt. So you go first. And that’s going to be enormously destabilizing in international politics.”

Killer robots and drones are not the only cause for concern; various unconventional AI weapons could be developed, such as one to find vulnerabilities in critical infrastructure including the electric grid or water supply systems.

Controlling the proliferation of such technologies is a daunting task, given that AI itself is just a piece of software. Even the largest models fit on a regular hard drive and can run on a small server farm. Simple but increasingly lethal AI weapons, such as killer drones, can be shipped in parts without raising alarm.

“Both vertical and horizontal proliferation incentives are enormous, and it’s easily done,” Thayer said.

De Ridder pointed out that the Chinese state wants to be seen as responsible on the world stage.

But that hasn’t stopped the CCP from supplying weapons or aiding the weapons programs of other regimes and groups that aren’t so reputationally constrained, other experts noted.

For example, the CCP could supply autonomous weapons to terrorist groups in order to tie up the U.S. military with endless asymmetrical conflicts. The regime could even keep its distance by merely supplying the parts, letting proxies assemble the drones, much like Chinese suppliers provide fentanyl precursors to Mexican cartels and let them manufacture, ship, and sell the drugs.

The CCP has a long history of aiding Iranian weapons programs, while Iran in turn supplies weapons to terrorist groups in the region.

“There would be little disincentive for Iran to do this,” Thayer said.

Human in the Loop

It’s generally accepted, at least in the United States and among its allies, that the most crucial safeguard against AI weapons wreaking havoc is keeping a human in control of important decisions, particularly the use of deadly force.

A military operator launches a Polish reconnaissance drone during test flights in the Kyiv region of Ukraine on Aug. 2, 2022.


“Under no circumstances should any machines autonomously, independently, be allowed to take a human life—ever,” De Ridder said.

The principle is commonly referred to with the phrase “human in the loop.”

“A human has a conscience and needs to wake up in the morning with remorse and the consequences of what they’ve done, so that they can learn from it and not repeat atrocities,” De Ridder said.

Some of the experts pointed out, however, that the principle is already being eroded by the nature of combat transformed by AI capabilities.

In the Ukraine war, for example, the Ukrainian military had to equip its drones with some measure of autonomy to guide themselves to their targets, because their communication with human operators was being jammed by the Russian military.

Such drones only run simpler AI, Ma said, given the limited power of the drone’s onboard computer. But that may soon change, as both AI models and computers are getting faster and more efficient.

Apple is already working on an AI that could run on a phone. “It’s highly likely it will be in the future put into a small chip,” he said.

Moreover, in a major conflict where hundreds or perhaps thousands of drones are deployed at once, they can share computational power to perform much more complex autonomous tasks.

“It’s all possible,” he said. “It’s gotten to the point where it’s not science fiction; it’s just [a matter of] if there is a group of people who want to devote the time to work on that. It’s tangible technology.”

Removing human control out of necessity isn’t a new concept, according to James Fanell, former naval intelligence officer and an expert on China.

He gave the example of the Aegis Combat System deployed on U.S. guided missile cruisers and destroyers. It automatically detects and tracks aerial targets and launches missiles to shoot them down. Normally, a human operator controls the missile launches, but there’s also a way to switch it to automatic mode, such as when there are too many targets for the human operator to track. The system then identifies and destroys targets on its own, Fanell said.

In mass drone warfare, where an AI directs thousands of drones in a coordinated attack, the side that gives its AI autonomy to shoot will gain a major speed advantage over the side that requires a human to approve each shot.

“On the individual shooting level, people have to give up control because they can’t really make all the decisions so quickly,” Ma said.

De Ridder pointed out that a drone shooting another drone on its own would be morally acceptable. But that could unleash a lot of autonomous shooting on a battlefield where there may be humans too, opening the door to untold collateral casualties.

South Korean military drones fly in formation during a U.S.–South Korea joint military drill at Seungjin Fire Training Field in Pocheon, South Korea, on May 25, 2023.

No Rules

Whatever AI safeguards may be practicable, the CCP is unlikely to abide by them anyway, most of the experts agreed.

“I don’t really see there will be any boundaries for China to be cautious about,” Ma said. “Whatever is possible, they will do it.”

“The idea that China would constrain themselves in the use of it, I don’t see that,” Fanell said. “They’re going to try to take advantage of it and be able to exploit it faster than we can.”

The “human in the loop” principle could simply be reinterpreted to apply to “a bigger, whole battle level” rather than “the individual shooting level,” Ma said.

But once one accepts that AI can start shooting on its own in some circumstances, the principle of human control becomes malleable, Fanell said.

“If you’re willing to accept that in a tactical sense, who’s to say you won’t take it all the way up to the highest level of warfare?” he said.

“It’s the natural evolution of a technology like this, and I’m not sure what we can do to stop it. It’s not like you’re going to have a code of ethics that says in warfare, [let’s abide by] the Marquess of Queensberry Rules of boxing. It’s not going to happen.”

Even if humans are kept in control of macro decisions, such as whether to launch a particular mission, AI can easily dominate the decision-making process. The danger wouldn’t be a poorly performing AI, but rather one that works so well that it instills trust in the human operators.

De Ridder was skeptical of predictions about superintelligent AI that vastly exceeds human capabilities. However, he did acknowledge that AI exceeds humans in some regards, particularly speed; it can crunch mountains of data and spit out a conclusion almost immediately.

And it’s virtually impossible to figure out how exactly an AI comes to its conclusions, according to Ma and Qiu.

De Ridder said that he and others are working on ways to restrict AI to a human-like workflow so that the individual steps of its reasoning are more discernible.

But given the incredible amount of data involved, it would be impossible for the AI to explain how each piece of information factored into its reasoning without overwhelming the operator, Ma said.

“If the human operator clearly knows this is a decision [produced] after the AI processed terabytes of data, he won’t really have the courage to overrule that in most cases. So I guess, yes, it will be formality,” he said.

“‘Human in the loop’ is a comfortable kind of phrase, but in reality, humans will give up control quickly.”

A military operator works on board the French navy patrol airplane Atlantique 2 on mission above the Baltic Sea on June 16, 2022.

Public Pressure

All the experts agreed that public pressure is likely to constrain AI weapon development and use, at least in the United States.

Ma gave the example of Google terminating a defense contract over the objections of its staff. He couldn’t envision an analogous situation in China, though.

Qiu agreed. “Anything inside China is a resource the CCP can leverage,” he said. “You cannot say, ‘Oh, this is a private company.’ There is no private company per se [in China].”

Even the CCP cannot dispose of public sentiment altogether, De Ridder said.

“The government can only survive if the population wants to collaborate,” he said.

But there’s no indication that the Chinese populace sees AI military use as an urgent concern. On the contrary, companies and universities in China appear to be eager to pick up military contracts, Ma said.

De Ridder called for “an international regulatory framework that can be enforced.”

It’s not clear how such regulations could be enforced against China, which has a long history of refusing any limits on its military development. The United States has long vainly attempted to bring China to the table on nuclear disarmament. Recently, China refused a U.S. request to guarantee that it wouldn’t use AI for nuclear strike decisions.

If the United States regulates its own AI development, it could create a strategic vulnerability, multiple experts suggested.

“Those regulations will be very well studied by the CCP and used as an attack tool,” Qiu said.

Even if some kind of agreement is reached, the CCP has a poor track record of keeping promises, according to Thayer.

“Any agreement is a pie crust made to be broken,” he said.

Chinese military delegates arrive at the closing session of the 14th National People's Congress at the Great Hall of the People in Beijing on March 11, 2024.

Solutions

De Ridder said he hopes that perhaps nations would settle for using AI in less destructive ways.

“There’s a lot of ways that you can use AI to achieve your objectives that does not involve sending a swarm of killer drones to each other,” he said. “When push comes to shove, nobody wants these conflicts to happen.”

Other experts said that the CCP wouldn’t mind starting such a conflict—as long as it would see a clear path to victory.

“The Chinese are not going to be constrained by our ruleset,” Fanell said. “They’re going to do whatever it takes to win.”

Reliance on the whispers of an AI military adviser, one that instills confidence by processing mountains of data and producing convincing battle plans, could be particularly dangerous, as it may create a vision of victory where there previously wasn’t one, according to Thayer.

“You can see how that might be very attractive to a decision maker, especially one that is hyper-aggressive, as is the CCP,” Thayer said. “It may make aggression more likely.”

“There’s only one way to stop it, which is to be able to defeat it,” Fanell said.

Chuck de Caro, former consultant for the Pentagon’s Office of Net Assessment, recently called for the United States to develop electromagnetic weapons that could disable computer chips. It may even be possible to develop energy weapons that could disable a particular kind of chip, he wrote in an op-ed for Blaze Media.

“Obviously, without functioning chips, AI doesn’t work,” he wrote.

An AI chip made by Tongfu Microelectronics is displayed during the World Semiconductor Congress in Nanjing, China, on July 19, 2023.

Another option might be to develop an AI superweapon that could serve as a deterrent.

“Is there an AI Manhattan Project that the U.S. is doing that can create the effect that Nagasaki and Hiroshima would have on the PRC and the Chinese Communist Party, that would bring them to the realization that ‘OK, maybe we don’t want to go there. This is mutually assured destruction’? I don’t know. But that’s what I would be [doing],” Fanell said.

That could leave the world in a Cold War-like stand-off—hardly an ideal state, but one likely seen as preferable to ceding military advantage to the CCP.

“Every country knows it’s dangerous, but nobody can stop because they are afraid they will be left behind,” Ma said.

De Ridder said it might take a profound shock to halt the AI arms race.

“We might need like a world war, with immense human tragedy, to ban the use of autonomous AI killing machines,” he said.

Update: This article has been updated with additional information from Bradley Thayer.
https://www.theepochtimes.com/article/undeterred-ccp-to-ignore-risks-of-ai-weapons-experts-say-5700338
