This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:

Archives (Index)
This page is archived by ClueBot III.
This date is everywhere on the internet but doesn't seem to correspond to any specific event. Could we explain this date in more detail in the article? — Preceding unsigned comment added by Jcolafrancesco (talk • contribs) 16:51, 27 December 2020 (UTC)
AI 'winter' necessarily follows from the spruiking of a concept having a basis only in the human imagination and no basis whatsoever in reality. There is no such thing as artificial intelligence. Never has been. And never will be. At least not by simulation implemented using digital computers. This is a necessary corollary arising from the study and understanding of the nature of deterministic computing. It is NOT possible for any deterministic computational system (e.g. an electronic digital computer) to emulate sentience (i.e. 'intelligence') in any way, shape or form. Persons who have studied and understood computer science know this and do not expound 'artificial intelligence' using digital computers because they know that - by definition - it is NOT possible. And they possess intellectual honesty. However, those who lack this understanding and/or do not possess intellectual honesty, or possess intellectual dishonesty, DO expound the notion of 'artificial intelligence' using deterministic computing devices. Such people are examples of the phenomenon known as 'snake-oil salesmen'; that is, people seeking to enrich themselves by promoting a fraudulent notion among the ignorant. Currently there are very many persons promoting the notion of 'artificial intelligence' in a manifestly fraudulent manner only in order to enrich themselves.
The notion of 'artificial intelligence' is precisely, and only, that - a notion. As 'time travel' is a notion. Artificial intelligence has no [present] manifestation in reality because it CANNOT have a manifestation in [present] reality, just as time travel has no basis in [present -sic] reality because it CANNOT have any basis in [present] reality. — Preceding unsigned comment added by 122.151.210.84 (talk) 02:13, 27 June 2022 (UTC)
Chloe 2400:AC40:60B:D5A3:C00E:2502:7F58:8C24 (talk) 22:50, 30 June 2022 (UTC)
I suggest that the redirect to this page from Quantum winter should be removed, and a page about the phenomenon called "Quantum winter" be created instead.
See the discussion on Talk:Quantum winter.
Liiiii (talk) 07:56, 26 March 2023 (UTC)
The current AI Spring has created some undeniable content: the increasing cadence of development, the explosion of competition to create the next big thing, and pushback from industry and government to impede progress. Is this on anyone's radar? Hcsnoke (talk) 15:17, 11 April 2023 (UTC)
Cutting this because (1) I think the article is better if we tighten it to cover just the history. (2) These sections are each at very different levels of reliability and notability. It would take a good editor and a lot of research to figure out exactly what's notable and to make this section WP:comprehensive. (3) My (personal) impression is that there are almost as many theories as there were observers, and that there is no way to settle the issue definitively.
If anyone disagrees, feel free to take it on. ---- CharlesTGillingham (talk) 00:33, 20 August 2023 (UTC)
Several explanations have been put forth for the cause of AI winters in general. As AI progressed from government-funded applications to commercial ones, new dynamics came into play. While hype is the most commonly cited cause, the explanations are not necessarily mutually exclusive.
Hype
The AI winters can[citation needed] be partly understood as a sequence of over-inflated expectations and subsequent crashes of the kind seen in stock markets, exemplified[citation needed] by the railway mania and the dotcom bubble. In a common pattern in the development of new technology (known as the hype cycle), an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment". Since scientific and technological progress cannot keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.[citation needed]
For example, in the 1960s the realization that computers could simulate single-layer neural networks led to a neural-network hype cycle that lasted until the 1969 publication of the book Perceptrons, which showed that there were severe limits on the set of problems single-layer networks could solve. In 1985 the realization that neural networks could be used to solve optimization problems, as a result of famous papers by Hopfield and Tank,[1][2] together with the threat of Japan's fifth-generation project, led to renewed interest and application.
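A minimal sketch (my own illustration, not from the article) of the Perceptrons-era limitation: the classic perceptron learning rule fits a linearly separable boolean function such as AND perfectly, but can never fit XOR, which is not linearly separable, no matter how long it trains.

```python
# Sketch of the single-layer limitation highlighted by Perceptrons (1969).
# A one-layer perceptron draws a single line through the input plane, so it
# can separate AND's outputs but can never separate XOR's.

def train_perceptron(samples, epochs=100, lr=1.0):
    """Classic perceptron learning rule on 2-input boolean data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w_and, b_and = train_perceptron(AND)  # converges: AND is linearly separable
w_xor, b_xor = train_perceptron(XOR)  # never reaches 100%: XOR is not
```

Adding a hidden layer (i.e. a multi-layer network, as in the 1980s backpropagation revival) removes this restriction, which is why the limitation applies only to single-layer networks.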
Institutional factors
Another factor is AI's place in the organisation of universities. Research on AI often takes the form of interdisciplinary research. AI is therefore prone to the same problems other types of interdisciplinary research face. Funding is channeled through the established departments and during budget cuts, there will be a tendency to shield the "core contents" of each department, at the expense of interdisciplinary and less traditional research projects.
Economic factors
Downturns in a country's national economy cause budget cuts in universities. The "core contents" tendency worsens the effect on AI research and investors in the market are likely to put their money into less risky ventures during a crisis. Together this may amplify an economic downturn into an AI winter. It is worth noting that the Lighthill report came at a time of economic crisis in the UK,[3] when universities had to make cuts and the question was only which programs should go.
Insufficient computing capability
Early in computing history the potential of neural networks was understood, but it could not be realized at the time: fairly simple networks require significant computing capacity even by today's standards.
Empty pipeline
It is common to see the relationship between basic research and technology as a pipeline: advances in basic research give birth to advances in applied research, which in turn lead to new commercial applications. From this it is often argued that a lack of basic research will lead to a drop in marketable technology some years down the line. This view was advanced by James Hendler in 2008,[4] who claimed that the fall of expert systems in the late 1980s was due not to an inherent and unavoidable brittleness of expert systems, but to funding cuts in basic research in the 1970s. Expert systems advanced in the 1980s through applied research and product development, but by the end of the decade the pipeline had run dry, and expert systems were unable to produce improvements that could have overcome this brittleness and secured further funding.
Failure to adapt
The fall of the LISP machine market and the failure of the fifth-generation computers were cases of expensive, advanced products being overtaken by simpler and cheaper alternatives. This fits the definition of a low-end disruptive technology, with the LISP machine makers being marginalized. Expert systems were carried over to the new desktop computers by, for instance, CLIPS, so the fall of the LISP machine market and the fall of expert systems are, strictly speaking, two separate events. Still, the failure to adapt to such a change in the outside computing milieu is cited as one reason for the 1980s AI winter.[4]
References
CharlesTGillingham (talk) 00:33, 20 August 2023 (UTC)
The article discusses the origin of the term in the second paragraph. I reverted an edit which changed:
To:
It's important that (1) they were leaders (2) they had already survived one "winter", because this is the "who" and "why" here, and it's essential. (Lots of people worked in AI in random periods of time in the past, so the second version doesn't actually say anything notable.) The reader needs to know that (1) these are people worth listening to, and that (2) they were speaking from experience.
You could argue that the word "survived" is WP:peacock and tone it down to "experienced" or something. That would be fine. ---- CharlesTGillingham (talk) 00:17, 25 August 2023 (UTC)