Newsgroups: soc.culture.china
From: aminer68@gmail.com (Amine Moulay Ramdane)
Date: Thu, 1 Jun 2023 10:20:42 -0700 (PDT)
Subject: More of my philosophy about common sense and nuanced understanding of Large Language Models such as ChatGPT and about my predictions about artificial intelligence and about the other weaknesses of Large Language Models such as GPT-4 and about my abst
Message-ID: <ae7de63a-0fe2-40e6-ae10-e7c00bc026afn@googlegroups.com>
https://news.novabbs.org/interests/article-flat.php?id=14815&group=soc.culture.china#14815

Hello,

More of my philosophy about common sense and nuanced understanding of Large Language Models such as ChatGPT and about my predictions about artificial intelligence and about the other weaknesses of Large Language Models such as GPT-4 and about my abstraction and about the important weakness of Large Language Models and about the quality of Large Language Models such as GPT-4 and about the deeper meaning and about mathematics and about Large Language Models such as GPT-4 and more of my thoughts..

I am a white Arab from Morocco, and I think I am smart since I have also invented many scalable algorithms and algorithms.

I have just asked the following questions to ChatGPT in order to show another weakness of Large Language Models such as ChatGPT. Here is my discussion with ChatGPT, and I invite you to read it carefully, and to read my thoughts below it, so that you understand the other weaknesses of Large Language Models such as ChatGPT:

Here is my first question to ChatGPT:

"What that means that Large Language Models such as ChatGPT don't have common sense ?"

And ChatGPT has just answered me the following:

"Large language models like ChatGPT, including the version I am based on (GPT-3.5), do not possess true common sense in the same way humans do. While these models have been trained on massive amounts of text data and can generate coherent and contextually relevant responses, they lack the inherent understanding and background knowledge that humans acquire through real-world experiences.

Common sense is the ability to make reasonable assumptions, understand implicit information, and apply general knowledge about the world in everyday situations. It involves a deep understanding of cause-and-effect relationships, social norms, and context. While language models can generate responses based on patterns they have learned from training data, they do not have genuine experiences or an understanding of the world outside of the text they have been trained on.

Language models like ChatGPT rely on statistical patterns in the training data to generate responses. They excel at mimicking human-like language but can also produce outputs that are plausible-sounding but incorrect or nonsensical. They lack the ability to reason, think critically, or possess true understanding.

It's important to be cautious when relying solely on language models for information or decision-making, especially when it comes to areas that require common sense or nuanced understanding. While these models can provide valuable assistance and information, their responses should be verified, critically analyzed, and cross-checked with reliable sources and human judgment to ensure accuracy and avoid potential pitfalls."
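
So to make concrete what "plausible-sounding but incorrect" can look like in code, here is a small hypothetical Python sketch of my own (it is an illustration I am adding, not something ChatGPT generated in the above discussion): the first function reads naturally and works for the common years, but it silently mishandles an edge case of the Gregorian calendar that a human with common sense about the domain would catch.

# Hypothetical illustration of "plausible-sounding but incorrect" code,
# the kind of subtly wrong output a language model can produce.

def is_leap_year_wrong(year: int) -> bool:
    # Looks reasonable, but it forgets the century rules of the Gregorian calendar.
    return year % 4 == 0

def is_leap_year(year: int) -> bool:
    # Correct Gregorian rule: divisible by 4, except centuries,
    # unless the century is also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    for y in (1996, 2000, 1900, 2023):
        print(y, is_leap_year_wrong(y), is_leap_year(y))
    # 1900 is the edge case: the wrong version says True, the correct one says False.

The point is that both versions look equally fluent, and only domain knowledge tells you which one to trust.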

And here is my second question to ChatGPT:

"So that means that Large Language Models such as ChatGPT are not good for software development that requires common sense or nuanced understanding ?"

And ChatGPT has just answered me the following:

"That's correct. While large language models like ChatGPT can provide assistance and generate code snippets, they may not be the best choice for software development tasks that require common sense or nuanced understanding.

Software development often involves complex problem-solving, architectural design, and decision-making that goes beyond basic language understanding and generation. Common sense and nuanced understanding are crucial for tasks such as designing user interfaces, handling edge cases, and making judgment calls based on real-world context.

Large language models like ChatGPT lack the ability to truly understand the underlying concepts and principles behind software development. They can generate code based on patterns learned from training data, but they may not possess the ability to reason about the specific needs of a software project or make informed decisions about architectural trade-offs, performance optimizations, or security considerations.

For software development tasks that require common sense and nuanced understanding, it is generally recommended to rely on the expertise of experienced software engineers and architects who possess domain knowledge and have a deep understanding of the specific requirements and constraints of the project. While language models can provide some guidance or generate code snippets, they should be used as tools to complement human expertise rather than replacing it entirely."

And here is my third question to ChatGPT:

"Give me the the software development tasks that require common sense and nuanced understanding"

And ChatGPT has just answered me the following:

"Several software development tasks require common sense and nuanced understanding. Here are a few examples:

- Architectural Design: Designing the architecture of a software system requires an understanding of the problem domain, scalability requirements, performance considerations, and trade-offs between different design patterns. It involves making decisions about the best way to structure the codebase and components to ensure maintainability, extensibility, and efficient resource utilization.

- User Interface Design: Creating user-friendly and intuitive interfaces involves considering user experience, usability principles, accessibility, and visual design. It requires an understanding of how users interact with software, anticipating their needs, and making design decisions that align with common usage patterns.

- Data Modeling: Designing the database schema and data models for an application requires considering data integrity, relationships between entities, normalization principles, and efficient data retrieval and storage. It involves understanding the specific requirements of the application and designing data structures that support efficient querying and manipulation.

- Performance Optimization: Improving the performance of a software system involves analyzing bottlenecks, identifying areas for optimization, and making informed decisions about algorithmic improvements, caching strategies, and resource management. It requires a nuanced understanding of the underlying technologies, profiling techniques, and the trade-offs between performance and other system requirements.

- Error Handling and Exception Handling: Implementing effective error handling and exception handling mechanisms requires an understanding of potential error scenarios, fault tolerance, and graceful degradation. It involves anticipating and handling exceptions and errors that may occur during runtime, ensuring the application maintains stability and recoverability.

- Security Considerations: Building secure software requires considering potential vulnerabilities, implementing appropriate authentication and authorization mechanisms, and protecting against common security threats such as cross-site scripting, SQL injection, and data breaches. It involves understanding security best practices and keeping up-to-date with evolving security threats and countermeasures.

These are just a few examples, and there are numerous other software development tasks that require common sense and nuanced understanding. The expertise and experience of software developers play a crucial role in making informed decisions and successfully completing these tasks."
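
And to give one concrete example of the "Security Considerations" point in the above list, here is a small hypothetical Python sketch using the standard sqlite3 module (the "users" table and the input value are placeholders I am assuming for the illustration): it contrasts building a query by string concatenation, which is vulnerable to SQL injection, with a parameterized query, where the driver treats the input as data rather than as SQL.

# Hypothetical illustration of the SQL injection point above (standard sqlite3 module).
# The "users" table and the payload string are placeholders for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "x' OR '1'='1"  # a typical injection payload

# Injection-prone: the user input is concatenated into the SQL text itself.
unsafe_query = "SELECT name FROM users WHERE name = '" + user_input + "'"
print("unsafe:", conn.execute(unsafe_query).fetchall())  # returns every row

# Safer: a parameterized query; the input can no longer change the query's structure.
safe_rows = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
print("safe:", safe_rows)  # returns no rows, since no user is literally named like the payload

Both versions "work" on friendly input, and this is exactly the kind of judgment call about real-world misuse that the above list says requires nuanced understanding rather than pattern completion.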

--

So I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I think that Large Language Models such as GPT-4 will not be able to become good in the short to medium term, since I am explaining the important weakness in my first question to ChatGPT above: it is smartly abstracted in that question, and you can understand from it that ChatGPT is not like a human who can go and select a software programming book from an author of good professionalism, of good knowledge, and of a high IQ. So then you understand the constraint on Large Language Models such as GPT-4, and I think that the most important problem of Large Language Models such as GPT-4 is how we can ensure the high quality of the data, by ensuring that the professionalism, knowledge, and IQ of the authors of the books or of GitHub and so on, about for example programming or software development, are of high quality. So you understand that it can take a long time to solve this problem, and I think that it will be the much more sophisticated future Cyborgs that will be able to solve this problem fast and efficiently. Of course, you can notice that we are actually living with Cyborgs, but they are not as sophisticated as is required to solve the above most important problem, and here are my thoughts about today's Cyborgs so that you understand my views:

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So I have just received the following answer from a person in the French newsgroup fr.soc.politique, so here it is, and read it carefully:

"Inventing scalable algorithms and algorithms is a great achievement, but it does not necessarily make someone "smart." "

So I think that I am discovering a pattern with my fluid intelligence, and it is that the above person is saying that the process of inventing scalable algorithms and algorithms doesn't necessarily make someone "smart". But I think that the above person is not answering correctly, since the above person is speaking about the "process" of "inventing", but you have to know that the scalable algorithms and algorithms can be the finished invented tool, and you can also make yourself smart by using or selling this finished invented tool. So making yourself smart doesn't come just from IQ and culture; making yourself smart can also come from using or selling the finished invented tool.

So you can, for example, make yourself smart by knowing how to "use" the tool of the internet, and knowing how to use the tool that we call the internet can be a not so difficult process that makes you smart. So we can logically infer that the smartness that is the tool of the internet is an "extension" of the smartness of humans that is used in order to become smart, and we can then logically infer that you can become like a Cyborg by using this or that tool such as the internet. So smartness is not only IQ and culture, but it is also the "extension" of you and of your smartness that is the "tool" such as the internet, since a tool such as the internet is also an artificial smartness. So we are not just humans, but we are becoming like Cyborgs by using smart tools such as the internet or ChatGPT as extensions of ourselves.


