More of my philosophy about the abstraction of the language and about programming and about the most important problem of artificial intelligence and about Reinforcement learning and about collective intelligence and about my views on super-intellige

https://news.novabbs.org/interests/article-flat.php?id=13364&group=soc.culture.china#13364

Newsgroups: soc.culture.china
Date: Fri, 16 Jun 2023 08:55:34 -0700 (PDT)
Message-ID: <acb70186-bced-465d-bd84-166b271fc132n@googlegroups.com>
Subject: More of my philosophy about the abstraction of the language and about
programming and about the most important problem of artificial intelligence
and about Reinforcement learning and about collective intelligence and about
my views on super-intellige
From: aminer68@gmail.com (Amine Moulay Ramdane)
 by: Amine Moulay Ramdane - Fri, 16 Jun 2023 15:55 UTC

Hello,

More of my philosophy about the abstraction of the language and about programming and about the most important problem of artificial intelligence and about Reinforcement learning and about collective intelligence and about my views on super-intelligent AI and about the danger of super-intelligent AI and about my new proverb and about the attention mechanisms in Transformers and about the good accuracy and about the hybrid systems in AI and about logical reasoning of Large Language Models such as GPT-4 and about evolutionary algorithms and about GPT-4 and about common sense and nuanced understanding of Large Language Models such as ChatGPT and about my predictions about artificial intelligence and about the other weaknesses of Large Language Models such as GPT-4 and about my abstraction and about the important weakness of Large Language Models and about the quality of Large Language Models such as GPT-4 and about the deeper meaning and about mathematics and about Large Language Models such as GPT-4 and more of my thoughts..

I am a white Arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms and other algorithms.

So I have just read the following article, in which Yann LeCun says that Large Language Models such as GPT-4 are not smart; look at it here (you can translate the article from French to English):

https://intelligence-artificielle.developpez.com/actu/345604/Le-responsable-de-l-IA-chez-Meta-affirme-que-l-intelligence-artificielle-n-est-pas-encore-aussi-intelligente-qu-un-chien-et-rejette-l-idee-selon-laquelle-les-robots-allaient-s-emparer-du-monde/

So I think that Yann LeCun, who is VP and Chief AI Scientist at Facebook, is not correct. Large Language Models such as GPT-4 capture the patterns of the language, and the language is itself an abstract understanding of the real world; when a Large Language Model like GPT-4 writes to its user, the user understands that abstract language with human meaning, so the abstract understanding makes a Large Language Model like GPT-4 smart. So I think that Large Language Models such as GPT-4 are powerful and smart. GPT-4 can also be creative from the patterns that it has discovered in the training data, so it is limited by the training data, but it can still be creative from its understanding of the patterns that it has discovered in the data. And of course, I also explain in my previous thoughts below that we can enhance Large Language Models such as GPT-2 by adding evolutionary algorithms, so I invite you to read all my previous thoughts below about Large Language Models so as to understand my views:
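To illustrate the idea that a language model captures patterns from its training data and can only recombine what it has seen there, here is a minimal toy sketch in Python of a bigram-level model. It is nothing like the Transformer architecture behind GPT-4; the corpus, the train and generate function names, and the sampling scheme are all invented for the illustration:

import random
from collections import defaultdict

# Toy bigram "language model": it only records which word follows which word
# in the training text, so everything it generates is a recombination of
# patterns that already exist in that text.
def train(corpus_words):
    successors = defaultdict(list)
    for current_word, next_word in zip(corpus_words, corpus_words[1:]):
        successors[current_word].append(next_word)
    return successors

def generate(successors, start_word, length=10):
    word = start_word
    output = [word]
    for _ in range(length):
        candidates = successors.get(word)
        if not candidates:
            break  # no pattern was ever seen after this word: the model is stuck
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

corpus = ("the language is an abstraction of the real world "
          "and the model captures the patterns of the language").split()
model = train(corpus)
print(generate(model, "the"))

The point of the toy is only that generation stops or loops once the model leaves the patterns present in its training text, which is the sense in which a model is bounded by its data.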

So here are my new thoughts on programming:

So in programming, so as not to make the system harder to understand, test, and maintain, you have to implement only what you need and minimize complexity as much as possible. You should avoid duplication of code in your application, encapsulate data and behavior in your classes and objects, and take advantage of object-oriented programming (OOP) concepts such as inheritance, composition, and polymorphism to create modular, manageable, and organized code. Of course you have to minimize coupling and maximize cohesion, you should document your code well so that it is much easier to manage, maintain, and debug, you should run unit tests often, you have to use meaningful names, and you should refactor your code regularly to improve its quality, since refactoring makes the code far easier to maintain over time.
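As a small illustration of a few of these points (encapsulation, composition, meaningful names, and unit tests), here is a minimal sketch in Python; the Engine and Car classes and their tests are invented for the example and are not taken from any particular project:

import unittest

class Engine:
    """Encapsulates the data and behavior of an engine."""
    def __init__(self, horsepower: int):
        self._horsepower = horsepower  # kept behind the class interface

    def power(self) -> int:
        return self._horsepower

class Car:
    """Uses composition: a Car has an Engine instead of inheriting from it."""
    def __init__(self, engine: Engine):
        self._engine = engine

    def can_tow(self, trailer_weight_kg: int) -> bool:
        # One small, clearly named responsibility keeps this easy to test.
        return self._engine.power() * 10 >= trailer_weight_kg

class CarTest(unittest.TestCase):
    def test_can_tow_light_trailer(self):
        car = Car(Engine(horsepower=120))
        self.assertTrue(car.can_tow(trailer_weight_kg=800))

    def test_cannot_tow_heavy_trailer(self):
        car = Car(Engine(horsepower=50))
        self.assertFalse(car.can_tow(trailer_weight_kg=2000))

if __name__ == "__main__":
    unittest.main()

Composition is used here instead of inheritance because a Car has an Engine rather than being one, which keeps the two classes loosely coupled and each of them easy to test on its own.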

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. I think there is still a really important problem to solve in artificial intelligence: the language is only an abstraction of the real world, so when you understand the language, or logically infer the patterns from the language as GPT-4 does, those patterns are not an understanding of the real world itself. And even if we use a hybrid system of both Large Language Models such as GPT-4 and evolutionary algorithms, it can take too much time to explore with evolutionary algorithms in order to discover new problem-solving strategies or algorithms, or even improvements to existing algorithms, so it is not like human intelligence. So I think that is why we can say that artificial intelligence will not attain artificial general intelligence and will not attain artificial superintelligence, and I invite you to read my following thoughts, which talk about how to solve the problem by understanding consciousness and about my new model that explains human consciousness:

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just looked more carefully at GPT-4, and I think that, as I have just explained, it will become powerful, but it is limited by the data, and by the quality of the data, on which it has been trained. So if it encounters a new situation to be solved, and the solution cannot be inferred from the data on which it has been trained, it will not be capable of solving this new situation. So I think that my new model of what consciousness is explains that what is lacking is the meaning that comes from human consciousness and that permits us to solve the problem, so my new model explains that artificial intelligence such as GPT-4 will not attain artificial general intelligence (AGI), but even so, I think that artificial intelligence such as GPT-4 will become powerful.

So I think that the problematic in artificial intelligence is about the low-level layers. Look at the assembler programming language: it is a lower-level layer than high-level programming languages, but you have to notice that the low-level layer of assembler can do things that the higher-level layer cannot do. For example, you can play with the stack, low-level hardware registers, low-level hardware instructions, and so on, and notice how a low-level layer like assembler programming can teach you more about the hardware, since it is really near the hardware. So I think that this is what is happening in artificial intelligence such as the new GPT-4: GPT-4 is trained on data so as to discover patterns that make it smarter, but the problem is that this layer of how it is trained on the data so as to discover patterns is a high-level layer, like a high-level programming language. So I think that it is missing the low-level layers of what makes the meaning, like the meaning of the past, the present and the future, or the meaning of space, matter and time, from which you can construct the bigger meaning of other, bigger things. That is why I think that artificial intelligence will not attain artificial general intelligence (AGI), and I think that what is lacking in artificial intelligence is what my new model of what consciousness is explains. So you can read all my following thoughts at the following web link, so as to understand my views about it and about different other subjects:

https://groups.google.com/g/alt.culture.morocco/c/QSUWwiwN5yo
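As a rough software analogy for the idea that a lower layer exposes details that a higher layer hides, here is a small Python sketch using the standard dis module to print the bytecode beneath an ordinary high-level function; this only illustrates layering in software in general, it says nothing specific about GPT-4 or about consciousness:

import dis

def add_prices(prices):
    # High-level view: a one-line sum over a list.
    return sum(price * 1.2 for price in prices)

# Lower-level view: the bytecode instructions that the interpreter actually
# executes for the same function, a layer normally hidden from the programmer.
dis.dis(add_prices)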

So I believe I have smartly covered the subject of the limitations of Large Language Models such as GPT-4, and you can read about it in my previous thoughts below, but now I think there is still a limitation that remains. Reinforcement Learning from Human Feedback for GPT-4 ensures an exploration from the patterns discovered in the data on which it has been trained, so GPT-4 can enhance itself with rewards, but the rewards come from the judgments of the humans that use, for example, GPT-4. So it has the same limitation that I am talking about in my previous thoughts below: ChatGPT cannot guarantee a high quality of professionalism, knowledge, or IQ in those that make the judgments that produce the reward in Reinforcement Learning from Human Feedback for GPT-4. And since there is also the same limitation in the training data, as I explain below, I think you understand that it is one more limitation, so I invite you to read all my interesting previous thoughts below, so as to understand the other limitations of Large Language Models such as GPT-4:
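To make concrete the point that the reward signal in this kind of human-feedback training is only an aggregate of human judgments, and therefore only as reliable as the judges, here is a minimal toy sketch in Python; the simulated judges, their reliability values, and the simple vote-based reward are all invented for the illustration and do not describe how Reinforcement Learning from Human Feedback is actually implemented for GPT-4:

import random

# Toy illustration: the "reward" for a model response is just an aggregate
# of human judgments, so noisy or unqualified judges directly distort it.
def judge(judge_reliability):
    """Return True if the judge prefers the genuinely better response.

    A perfectly reliable judge always picks it; an unreliable judge
    often picks at random.
    """
    if random.random() < judge_reliability:
        return True                  # correct judgment
    return random.random() < 0.5     # noisy judgment

def human_feedback_reward(judge_reliabilities):
    votes_for_better = sum(judge(reliability) for reliability in judge_reliabilities)
    return votes_for_better / len(judge_reliabilities)  # reward in [0, 1]

careful_judges = [0.95] * 20
careless_judges = [0.55] * 20
print("reward with careful judges: ", human_feedback_reward(careful_judges))
print("reward with careless judges:", human_feedback_reward(careless_judges))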

So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just looked at the following new video of Mark Zuckerberg's timeline for AGI, and I think that he is talking about collective intelligence that can become superintelligence. So there is not only superintelligence that comes from a model of artificial intelligence, as I explain below, but there is also superintelligence that can come from humans specializing, using artificial intelligence such as Large Language Models like GPT-4, and interacting in a way that is smart and that creates superintelligence. So I invite you to look at the following video of Mark Zuckerberg so as to understand his views:

Mark Zuckerberg's timeline for AGI: When will it arrive? | Lex Fridman Podcast Clips

https://www.youtube.com/watch?v=YkSXY4pBAEk

And to understand my views about Large Language Models such as GPT-4 and about superintelligence, I invite you to read my previous thoughts below:

So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just posted about how superintelligent AI may be impossible to control, but now I will give my views about superintelligent AI. I think that Large Language Models such as GPT-4 will not attain superintelligence, since the exploration process of Large Language Models such as GPT-4 is limited by the training data. So what we need is a hybrid model of both Large Language Models such as GPT-4 and evolutionary algorithms that can explore much further beyond the training data and that can discover or invent new algorithms and so on. Then, by reading my previous thoughts below, you will understand that it can take time to solve these problems, so I invite you to read all my previous thoughts below about the limitations of Large Language Models such as GPT-4:
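To give an idea of what the evolutionary-algorithm side of such a hybrid could look like, here is a minimal sketch in Python of a generic evolutionary loop (random mutation plus selection on a fitness score); in a real hybrid system the candidates might be programs or prompts and the fitness might be computed with the help of a Large Language Model, but the bit-string candidates and the fitness function here are just an invented toy example:

import random

# Minimal evolutionary loop: mutate candidates and keep the fittest ones.
TARGET = [1] * 20                      # toy goal: a candidate of all ones

def fitness(candidate):
    # Toy fitness: how many positions match the target.
    return sum(1 for gene, goal in zip(candidate, TARGET) if gene == goal)

def mutate(candidate, rate=0.1):
    return [1 - gene if random.random() < rate else gene for gene in candidate]

def evolve(population_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the better half, then refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print("best candidate:", best, "fitness:", fitness(best))

Even in this toy form you can see why the exploration can be slow: the loop has to evaluate the fitness of every candidate in every generation, and a realistic fitness evaluation is far more expensive than counting matching bits.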

