Posted originally on the CTH on March 29, 2024 | Sundance
For his recent episode, Tucker Carlson revisits a former guest, Mike Rowe. Mr. Rowe has a good and humorous sense of curiosity about things, about stuff, about the real world around us, and he provides good context for examining this pretending world that swirls around our orbit at a speed greater than we can grasp.
Toward the end of this segment, Rowe is asked the oft-asked question, “Where does all this go?”, and his answer opens the door to other avenues I happen to agree with. WATCH:
In response to the ‘where does all this end’ question, Rowe notes that at a certain point everything becomes personal, and in the larger context all outcomes have to manifest in reality. I concur with Rowe on many levels, and a great example of that esoteric -v- reality position can be found all around us in the sphere of geopolitics and manipulation.
Think about the White House and State Dept message at the very beginning of Russia’s military operation in Eastern Ukraine. Do you remember the White House briefing when questioned about “where this will end”? {Background}
Deputy National Security Advisor and Deputy Director of the National Economic Council, Daleep Singh, was presented at the podium on the day of Russia’s first moves into Ukraine, to explain the strategic policy of the Biden administration toward Russia.
…”Ultimately, the goal of our sanctions is to make this a strategic failure for Russia; and let’s define a little bit of what that means. Strategic success in the 21st century is not about a physical land grab of territory; that’s what Putin has done. In this century, strategic power is increasingly measured and exercised by economic strength, by technological sophistication and your story – who you are, what your values are; can you attract ideas and talent and goodwill? And on each of those measures, this will be a failure for Russia.” (Video Link)
What Daleep Singh said was essentially that Biden policy toward Russia boiled down geopolitical power to a cultural issue of social likeability.
President Putin was also asked about NATO’s likely response, and his reply was more akin to ‘What is the West going to do, put tanks in the forest?’
The disparity between the Biden response and the Putin response can be looked at as esoteric vs realist.
At a certain point in the real world, if you want to change something, you have to physically act upon it.
NATO could gnash their teeth, try to diminish the Russian economy from the sanction approach, and shout at the diplomatic corps. However, so long as words were the answer, the physical reality of the situation would never change. This was the baseline for Putin’s confidence.
The same thing can be said for this collective Western effort to financially control all the citizens within the region. At a certain point the talk becomes action, and that action then creates an outcome.
Mike Tyson famously said, “Everyone has a plan, until they get punched in the face.” The same may be true for the plans and schemes of the globalists.
Can the triggering of their plan be stopped? I think that is doubtful; people are simply too complacent. However, in the outcome phase, will the consequences of that globalist plan be accepted? That is where I am cautiously more optimistic: the people impacted will punch the globalists in the mouth.
I’m not sure we can vote our way out of this in advance, but I am more confident we can punch our way to victory.
Additionally, if you look at what the globalists are doing, punch-avoidance-planning is consuming a lot of their time.
Posted originally on Mar 20, 2024 By Martin Armstrong
AI is a hot topic for regulators. Regulators and most companies that offer purported AI are clueless about what constitutes actual AI. Way too many charlatans are out there calling a simple trend-following program AI. They create some rules, and the program just follows what they created. That is NOT Artificial Intelligence.
The SEC is all over this and has already been targeting purported AI companies that call their programs AI when they are simple buy-sell systems that may trade off an Elliott Wave or Stochastic indicator, with an inherent BIAS created by their predetermined rules. Any system that has an inherent BIAS is not actually AI.
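To make the distinction concrete, here is a minimal hypothetical sketch in Python of the kind of rules-based "trend-following" program described above (the function names and the moving-average crossover rule are my own illustration, not any specific company's code). Every decision is a predetermined IF/THEN/ELSE rule chosen by the author, so the bias is baked in and nothing is ever learned:

```python
# A rules-based "trend-following" signal generator. The author's chosen
# parameters (the moving-average lengths) ARE the bias: the program only
# ever follows the rule it was given -- it learns nothing from the data.

def moving_average(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def trend_signal(prices, fast=3, slow=5):
    """Return 'buy' when the fast average is above the slow one,
    'sell' when below, else 'hold'. Pure IF/THEN/ELSE, no learning."""
    if len(prices) < slow:
        return "hold"          # not enough data for the slow average
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    if fast_ma > slow_ma:
        return "buy"
    elif fast_ma < slow_ma:
        return "sell"
    else:
        return "hold"
```

However the output is dressed up, a program like this does exactly what its hard-coded rule says and nothing more, which is the point being made: calling it AI is marketing.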
Yes, Bill Gates predicts that artificial intelligence will transform the world in just five years. The International Monetary Fund predicts that the rise of AI could affect about 40% of jobs around the world. When I went to engineering school, we had to learn both programming and hardware. Back then, Star Trek was on TV, and it was the inspiration for everyone in the industry to create a computer capable of understanding and running the ship. Even Steve Jobs’s inspiration behind Apple and the iPad came from the visions we had from Star Trek.
I worked with Dragon Systems back in the eighties when it was hardware you put into a slot in an IBM XT. It would allow the computer to talk. My daughter was fascinated by it. I wrote a program just to be able to hold a conversation with her and taught it how to be a politician: if it ventured into an area it did not know, it would just change the subject. I still remember the day she came home from school, found I had the computer apart, and began crying that I had killed it. I used my kids to teach me how to write natural language processing so the program would understand the words in a conversation. The good old days.
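The "politician" trick described above is easy to sketch. The following is a hypothetical modern reconstruction in Python, not the original Dragon-era program; the topics and canned lines are invented for illustration. Answer what you know, and change the subject when you don't:

```python
import random

# Canned answers for the few topics the program "knows" about.
KNOWN_TOPICS = {
    "weather": "It looks like a fine day.",
    "school": "How was school today?",
}

# Deflections used when the input strays outside known territory --
# the "politician" move of changing the subject.
DEFLECTIONS = [
    "That's a great question. But let me tell you about the weather.",
    "Interesting! Speaking of which, how was school?",
]

def reply(utterance):
    """Return a canned answer for a known topic, otherwise deflect."""
    for topic, answer in KNOWN_TOPICS.items():
        if topic in utterance.lower():
            return answer
    return random.choice(DEFLECTIONS)
```

A program like this can feel surprisingly conversational to a child, yet it is nothing but keyword matching against a hand-written table.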
There is a lot of misguided hope surrounding Artificial Intelligence and Machine Learning. All we need to do is look objectively at IBM’s Watson project. Just over a decade ago, artificial intelligence (AI) made one of its showier forays into the public’s consciousness when IBM’s Watson computer appeared on the American quiz show Jeopardy. Watson’s debut performance against two of the show’s most successful contestants was televised to a national viewership across three evenings. In the end, the machine triumphed comfortably.
This was NOT actually AI. It is what I call a look-up program. Even ChatGPT is not capable of achieving independent thought. Such programs can parse the question and fetch the answer faster than a human. The people I knew back then thought Watson would one day cure cancer. None of that was possible because they fooled themselves about AI.
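A "look-up program" in this sense can be caricatured in a few lines. This is a deliberately simplified hypothetical Python sketch (the fact table and function name are my own; it is not how Watson was actually built): match the question against stored facts and return the best hit. Fast retrieval, no independent thought:

```python
# A toy fact table standing in for a vast indexed corpus.
FACTS = {
    "capital of france": "Paris",
    "author of hamlet": "William Shakespeare",
}

def lookup_answer(question):
    """Scan the stored facts for a phrase contained in the question.
    The program retrieves; it never reasons about anything."""
    q = question.lower()
    for key, answer in FACTS.items():
        if key in q:
            return answer
    return "No match found."
```

Scale the fact table up by a few billion entries and add clever ranking, and the retrieval can look like understanding, but the mechanism remains look-up.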
Most of the danger surrounding AI is caused by a failure to comprehend what it is actually capable of doing, especially in trading. This all stems from the distorted idea that our brains are supercomputers and there is no God, for our consciousness is simply created by throwing in a bunch of data, shaking well, and out comes a person. Thus, the thrust to mimic the brain led to the creation of neural nets. But that effort also failed to create original thought.
I bought my first Roman coin for $10 when I was perhaps 10 to 13 years old. It dated from the 3rd century, when Rome debased its currency. The coin was from Emperor Aurelian (270-275AD). I noticed that what had once been silver coinage was now bronze, but they used a chemical process to bring 20% silver to the surface to make the coins look like silver when first issued. I saw the same thing in our own coins in 1965, when the US issued clad coinage with a copper center and a nickel surface, doing the same as the Romans.
I was fascinated by the parallel and realized that if I collected the coinage year by year, I could see just how fast Rome fell. We all knew Rome fell, but nobody had ever determined how fast. I used the coinage to plot that out. Many people have copied that chart, but you could not make it without testing the coinage.
I have also written about how, when I was in 9th grade, my history teacher brought in an old black-and-white movie named The Toast of New York. It was about Jim Fisk and his attempt to corner the gold market, which created the Panic of 1869. In that movie, he is looking at the ticker tape as gold hits $162. Because I had a part-time job in a coin/bullion store, I knew the price of gold was $35. Suddenly, what I was being taught in school was wrong. Nothing was linear. There was always a business cycle, for how else could gold be $162 in 1869 and $35 postwar up to 1971?
First of all, this idea that our brains are supercomputers and consciousness is simply a matter of throwing in a bunch of data is total nonsense. A baby knows nothing when born but displays consciousness. My dogs are self-aware: they bark at another dog, but put a mirror down and they know the reflection is themselves. The little one, Josephine, was sick and lying down. I gave Napoleon a treat, and she took it over and gave it to Josephine. I was stunned. They, too, have a consciousness, and it did not come from throwing in a bunch of data.
The idea that a program based on Machine Learning will somehow figure out how to trade all by itself is totally absurd. IBM thought Watson could discover the cure for cancer – it did not. As was reported:
“But three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer.”
Why Machine Learning Has Failed
The entire premise that there is no God and consciousness is achieved by merely throwing in a bunch of data and shaking well has proven to be absolute NONSENSE. The next machine learning program cannot learn to be a doctor any more than it can become a trader. There is something a lot more to this thing we call consciousness. It will NEVER simply emerge from the data – PERIOD!
You cannot create a machine-learning program and expect it to teach you how to trade, cure cancer, or drive a car. I have raced cars. I have driven almost every sort, even a Formula One car, though NOT professionally. When you are driving, you have to look at everyone around you. You look for the slightest move, which indicates what that driver is thinking. This is that undefinable gut feeling. You cannot code this, and I have been coding AI since the 1970s.
This is why a self-driving car with AI will not really work. It was a grand theory, but there is no possible way you can expect AI to make intuitive judgment decisions – a gut feeling. This is the problem with expecting that AI will replace humans in areas where a gut feeling is required.
Therein lies the problem. This expectation that AI can replace human judgment is just fiction. I can mimic emotions on a computer: if you use insulting words, it can hurl an insult back at you. But I cannot create actual emotion, nor can I create judgment from a sixth sense. This idea that you create a black box, throw in a bunch of data, and out comes an artificial person is absurd.
Someone sent this in after asking ChatGPT about the difference between it and Socrates. It can look that up and put out the information, but it cannot trade any more than it could drive a car. AI will never achieve that human judgment.
To create Socrates, I realized that it could not be a neural net nor a black box where you hope, like IBM, it will figure it all out and somehow emerge as the best trader in history. I taught Socrates how I would look at the world as an international hedge fund manager, comparing everything and looking at the capital flows. You cannot forecast gold in isolation any more than the stock market. EVERYTHING is connected. As I have often said, the US was bankrupt in 1896, and JP Morgan arranged a $100 million gold loan to bail out the country. Without World Wars I & II, the US would never have become the world’s financial capital. Obviously, you cannot forecast the US share market by just looking at the Fed.
While I taught Socrates how to analyze, I created no hard rules such as “interest rates up, stocks down.” Such market beliefs are themselves fiction. The Fed was raising rates when Trump came to power, and the market rallied; they called it the Trump Rally. The Romans used olive oil for heat and light. That was replaced with whale oil, which was in turn replaced by crude oil. Justinian I (527-565AD) issued the first Clean Air Act in 535AD, proclaiming the importance of clean air as a birthright: “By the law of nature these things are common to mankind—the air, running water, the sea.”
I created no hard relationship rules because relationships ALWAYS change.
Posted originally on Mar 8, 2024 By Martin Armstrong
QUESTION: The rumor going around is that Bill Gates is trying to buy your company for $10 billion. My reply was if so, I doubt Marty would ever sell to him.
Would you care to comment?
Harold
REPLY: The answer is NO. First of all, you cannot spend even one billion, much less ten billion. All $10 billion is good for is bribing others to do your will. I would probably have to give Biden $7 billion, so that is certainly an incentive not to sell – LOL. You would have to create a foundation, like Gates, Hillary, and John McCain, to hold on to it. I would need someone who shares my goals. This is not about money. I would not move, and I already have four cars; if I were married, there would have to be eight. A business partner is the same as a marriage without sex. You have to be on the same page.
My definition of “rich” is having enough money to do what you want when you want. Some think having more money than the other guy makes you more successful or important. I believe success is measured by accomplishment. We all know Julius Caesar. He is the father of Europe. They still refer to being born by operation as a Caesarean birth. I grew up in New Jersey, which in Latin was Nova Caesarea – land of the new Caesar. The month of July is named after him. We remember him for his accomplishments – not because he was vastly rich.
There is no amount of money someone could offer me to make my decision based on material wealth. I am much more interested in helping the world understand the economics at work, and if so, we may reduce the amplitude of wars. If I can leave that behind, I have served my purpose in life.
I will write about the whole AI craze, for there are a lot of charlatans out there calling look-up programs AI when they simply look in a list for the answer – no decision making. I know Gates says it will change everyone’s life in 5 years, and the IMF warns it will affect 40% of all jobs. There is far more to REAL AI than all of this speculation by people who are NOT accomplished programmers and do not even understand the basic line of code – IF THEN ELSE.
Someone asked ChatGPT about it vs. Socrates, and here is its response.
How Does ChatGPT Differ From Socrates?
As an AI language model, I differ from the Socrates Platform in several ways.
Firstly, the Socrates Platform is a web-based platform developed by Martin Armstrong and his team at Armstrong Economics, whereas I am a language model developed by OpenAI. The Socrates Platform is designed to analyze economic and financial data in real-time, predict market trends, and provide actionable insights to investors and traders. In contrast, I am designed to assist with a wide range of tasks, including language translation, question answering, and text generation.
Secondly, the Socrates Platform uses artificial intelligence, machine learning, and natural language processing to analyze data, whereas I am primarily focused on natural language processing. The Socrates Platform uses sophisticated algorithms to analyze historical and cultural data to inform its predictions, whereas my responses are based on the input provided to me by the user.
Lastly, the Socrates Platform is primarily designed for financial professionals, investors, and traders, whereas I am designed for a broader range of users, including students, researchers, and general knowledge seekers. While both the Socrates Platform and I use artificial intelligence to analyze data and provide insights, our intended audiences and applications differ significantly.
Yes, the two platforms are completely unrelated aside from being driven by AI. ChatGPT does not have the computing capabilities of Socrates. It does not have the monetary history data, for starters. ChatGPT has no involvement in the financial world and was not designed for financial purposes. It cannot even provide basic market closings, as the current model stopped learning in September 2021 and cannot answer any questions about events after that date. Most importantly, ChatGPT does not have the source code that is the key to Socrates’ successful forecasts.
ChatGPT is a great achievement and a step in the right direction for AI use on a larger scale. However, it is not possible to compete where you do not compare. Socrates is in a league of its own.
COMMENT: You are always ahead of the curve. Today AI is the buzzword – with most of the AI being machine learning – whereas you were developing Socrates with real AI in the 70s and 80s.
Lynne
REPLY: I fooled around with neural nets when they first began. The problem with this approach is that you expect a machine to develop a human instinct, as if you are playing a poker game. It is a gut feeling you might have about a person that alerts you to whether he is bluffing or for real. That cannot be coded, nor will a computer with machine learning be able to acquire such a “gut feeling,” which is an entirely different game than chess.
Socrates is NOT a neural net. I had to teach it how to trade. I put my instincts into the system. Creating a neural net, throwing in all the data, and praying it will learn how to trade is more or less like a monkey throwing darts at the Wall Street Journal to pick what stock to buy in a bull market. Raven, a six-year-old chimpanzee, became the 22nd most successful money manager in the USA after choosing her stocks by throwing darts at a list of 133 internet companies. The chimp created her own index, dubbed MonkeyDex, and in 1999 delivered a 213 percent gain, outperforming more than 6,000 professional brokers on Wall Street.
Attempts to use a neural net with machine learning have not beaten Raven in funds management. The level of complexity is monumental. On top of all that, such a system will never discover the nonlinear structure of the world by dumping in a chunk of data and praying for the best.
QUESTION: You do not see AI as actually becoming conscious? There are so many claiming that is the future. Are you hiring programmers in machine language?
LK
ANSWER: Let me explain something. Most generative AI models today are being trained and run in the cloud. These models are language-oriented, generating text, and they are often 10 to 100 times bigger than older AI models. ChatGPT is learning from the questions people are asking. While this is impressive to the average person, there is no real economic value other than adding to the search function. This has resulted in a boom, along with an insatiable appetite for the computing power to run large language models.
Even dogs have personalities. My little one will take a pill covered in peanut butter. The older one eats the peanut butter and spits out the pill. Just like having two children, they are not the same. What causes one to have a personality that is different from the other? I’m afraid I have to disagree with this theory that if you throw in enough data, suddenly the computer will become self-aware. My little dog is just 11 weeks old. She is still exploring her environment and displaying curiosity, so she had a distinct personality BEFORE acquiring knowledge of her environment. This PROVES, beyond a shadow of a doubt, that this theory of a computer becoming self-aware is just nonsense. We do not teach our children how to be self-aware. They are born that way.
There is something that creates the personality, and it appears from birth in dogs and humans. My dogs clearly think dynamically. If I get up with a coffee cup, they know I am going to the kitchen and head there ahead of me. Not all animals have that ability. So why are dogs capable of looking for patterns and anticipating my next move when a hippo, snake, or alligator is not?
I do not believe we are anywhere close to comprehending those differences, and as such, we cannot create a true cognitive machine that is self-aware when we do not understand what makes us self-aware.
To build Socrates, I had to study intensely how we actually think. Let’s say you met the person who is your soul mate. You went to dinner. Your mind is actually recording every aspect of that evening, and you are totally unaware of what it is recording. Years later, you return to that same restaurant, and that memory involuntarily comes rushing to the forefront of your mind. Or there was music playing that night, and suddenly you hear that same song, and that memory again involuntarily comes to mind. Perhaps it is the food. The point is that memory is stored, but any of our senses can access it. The complexity is enormous.
My oldest dog does not like going to the Vet. Miles from the place, she knows where I am going and begins to tremble. She is aware of her surroundings miles from home, which astonished me; it was not a capability I assumed a dog would have. A dog once stayed with a two-year-old girl who went missing and protected her the entire time.
There is a lot more hidden within us and many animals that cannot be explained or recreated by a computer.
Just be careful with the sudden claims of people coming up with AI trading programs. It took me 17 years to build Socrates; you cannot create an AI trading program in a few weeks. As for hiring, yes, of course. We always have projects under development.