
Question ChatGPT

mzflorida

My responsibilities at work require that I consider strategic threats. ChatGPT (artificial intelligence) is on our radar for many reasons. Anyhow, I’m thinking that as AI becomes more prevalent, forums like this may become infrequently utilized, and perhaps irrelevant from a technical perspective. For instance, if someone is looking for a torque value for a rear shock on a Yamaha, the AI will return that spec along with the steps to replace the shock, things to be careful of, brands that make shocks for the bike, links to tutorials, etc.

I like the interactions with others here on the forum. I’ve made some friends here, and on other forums. Personal relationships have greater value than any technical advice I’ve ever gained. AI could diminish that social component.

Anyhow, I’m curious to see what others think about the impact AI might have on forums like this, culture in general, or any other thoughts you might have.

I feel AI has a lot of value to offer but also presents a lot of unintended consequences that may be very negative.
 
I have no experience with AI such as ChatGPT except what my adult children briefly showed me. However, using your specific example of the Yamaha shock, it suggests the AI does a comprehensive gathering of relevant, existing information and presents it in an organized, usable format. I may be wrong about the full scope of the AI in this example, as again, I have little experience with it. Maybe I need to use ChatGPT to explain AI to me.

My initial thought on that form of AI is that it would have little impact on the forum. First, I would think the AI can only present information as accurate as its sources. If the Yamaha published spec is incorrect, or the aftermarket shock directions contain errors or misleading procedures, is the AI ”smart” enough to know that and correct the errors? Can the AI sort through pages of existing on-line discussions and accurately extract the correction to those errors? Or do we continue to use the forum to sort through those issues? Honda service manuals, OEM specs, aftermarket procedures, and other on-line human conversations are written by humans and can indeed contain errors and need enhancement.

And, with all the misinformation on the internet, how does AI separate truth from fiction? If someone says the worst thing you can do to an engine is let it sit, and that you should instead run the engine monthly, just because, does AI know whether that is true or false, and include it or not in its response to a query? That question probably already has an answer; I just don’t know it.

Using the thread about the square thingy switch or whatever, would the AI have eliminated the need for that thread altogether and immediately presented the correct part number the first time someone ever asked? I don’t know.

Secondly, there are times when, for example, a quest for a torque spec is indeed a challenge for the group because the answer is elusive. But other times it’s just spoon-feeding someone who doesn’t want to bother looking for it or paying for a service manual. If the AI reduces the forum spoon-feeding, which is a small part of its function, that’s OK, and I don’t see the AI reducing the social value of the forum.
 
Lots of great observations. The idea is that the AI’s learning is supervised to return accurate results before it has the ability to “think” on its own. Therein lies the problem. Bias in the supervision that informs the intelligence can skew future results. If the bias in the supervision stage teaches the model that cheeseburgers taste bad, that bias becomes part of the basis for its unsupervised learning. The same bias could theoretically be applied to Harley being better than Indian, equal to, worse than, etc. Ideally, AI would be free of bias and present matters of fact. It’s not there yet.
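To make that concrete, here’s a toy sketch of the point (my own illustration, not how ChatGPT actually works, and all the names and numbers are made up): if the labels fed in during the supervised stage are skewed, the “learned” answer simply reflects the skew, not the underlying fact.

```python
# Toy illustration (hypothetical): biased supervision produces a biased "fact".
from collections import Counter

def train(labeled_examples):
    """Learn one answer per question by majority vote over the supplied labels."""
    votes = {}
    for question, label in labeled_examples:
        votes.setdefault(question, Counter())[label] += 1
    # The model's "knowledge" is just whichever label the supervisors gave most often.
    return {q: counts.most_common(1)[0][0] for q, counts in votes.items()}

# Honest supervision: most people actually like cheeseburgers.
honest = [("cheeseburgers", "tasty")] * 6 + [("cheeseburgers", "bad")] * 4

# Biased supervision: the labelers mostly said they taste bad.
biased = [("cheeseburgers", "bad")] * 8 + [("cheeseburgers", "tasty")] * 2

print(train(honest)["cheeseburgers"])  # "tasty"
print(train(biased)["cheeseburgers"])  # "bad" — the bias, not the fact, is learned
```

The same mechanism applies to a Harley-vs-Indian question: the model can only echo whatever slant dominated the material it was trained on.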

Regarding the switch part number, the idea is that if this forum were a trusted source, the AI would return results informing the consumer that part 1 is correct and that part 2 can be used, but only with modification. It would also draw on the advice you provided regarding frame grounding to guide the consumer with best practices.

I’m in the queue to get access to ChatGPT. When it’s granted we can test it out. I have access at work but could not use it for anything here. It will be interesting in any case.

Edit: this is my understanding of how it works. I’m not a tech guy per se; I manage fraud, negligence, safety and security, and misconduct.
 