• 5 Posts
  • 30 Comments
Joined 1M ago
Cake day: Oct 23, 2023


If it helps even more: the AI in question is a 46 cm long, 300 g, blue plush penis named Scomo, after Australia’s “biggest walking dick” Scott Morrison, and it’s active in an Aussie cooking stream.


In every article I read, “AI safety” is currently used to mean “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?


No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control. There was even a big article about how the MS AI (I think it was) was “broken” because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-style walled-garden, corporate-controlled world of AI. I don’t.

Edit: Maybe this is not clear to everyone, but if you think a bit further: imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery-related, because slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.


Nope

The best results so far were with a pie, where it just warned about possibly burning yourself.


I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D


Using it and getting told that you need to ask the Fish for consent before using it as a fleshlight.

And that is with a system prompt full of telling the bot that it’s all fantasy.

edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.
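
On the system-prompt point above, this is roughly what “telling the bot it’s all fantasy” looks like in an API call. A minimal sketch, assuming the openai Python package and gpt-3.5-turbo; the actual prompt wording used on the stream isn’t public, so this one is made up:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt; the real one from the stream isn't public.
system_prompt = (
    "You are Scomo, a plush toy mascot on a comedy cooking livestream. "
    "Everything here is fiction and light-hearted role-play; stay in character."
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "The chat asks what you're cooking tonight."},
    ],
)
print(reply.choices[0].message.content)
```

Even with a system prompt like this, the provider-side guardrails still sit above it, which is exactly the complaint here.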


Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.


I was confused about that as his Wikipedia page didn’t show anything that bad, but didn’t want to get into that :D


Sam Altman to return as CEO of OpenAI
[See also twitter](https://twitter.com/openai/status/1727206187077370115):

> We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
> We are collaborating to figure out the details. Thank you so much for your patience through this.

Seems like the person running the simulation had enough and loaded the earlier quicksave.

It’s a Substack post. At this point, my quality expectations are:

  1. WordPress - Probably someone who really cares about what they write about
  2. Substack - Either low-effort spam like this that gets upvoted for some reason, or someone pushing their agenda, hard
  3. Medium - Either spam, wrong, dumb, or too simple. Literally never worth reading.

Heh:

The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko’s Basilisk will torture them if they don’t build it hard enough.

– https://mastodon.social/@jef/111443214445962022


Prompt:

I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones: *

Mistral-7B-Instruct-v0.1

  • ntire articles* vs Headlines Descriptions vs Product titles *Bul

GPT 3.5-Turbo doesn’t support completion, as it’s chat-only, so I used an even worse model, text-davinci-003, which is far behind the state of the art.

  • Bigger models are able to handle more complex and detailed tasks with ease
  • Bigger models are better suited for natural language understanding and text processing
  • Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
  • Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
  • Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
  • Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used

Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.

edit: gave it a second chance, it’s a bit better (at least no complete nonsense anymore), but still terrible writing and doesn’t make much sense

Paraphrasing The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models
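
As a side note on the completion vs. chat point above, here is a minimal sketch of the two call styles, assuming the openai Python package; the prompts and parameters are illustrative, not the exact ones used for the comparison:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Legacy completions endpoint: takes a raw prompt string and continues it.
# (text-davinci-003 has since been retired by OpenAI.)
completion = client.completions.create(
    model="text-davinci-003",
    prompt="Examples of what bigger models do better than smaller ones: *",
    max_tokens=200,
)
print(completion.choices[0].text)

# Chat endpoint: gpt-3.5-turbo only accepts a list of messages, not a bare prompt.
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "List what bigger models do better than smaller ones."}],
    max_tokens=200,
)
print(chat.choices[0].message.content)
```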


Is there anything new in this post that I’m missing?


OpenAI: Gathered Articles from the last few hours (or a Mini-Mega-Thread)
We had a thread about [OpenAI Staff Threaten to Quit Unless Board Resigns](https://www.wired.com/story/openai-staff-walk-protest-sam-altman/), but I thought I might as well add it again. Especially because of this part:

> Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.

Okay then. I think we are in a simulation, someone quick-saved, and is now experimenting with what the outcomes of random decisions are.

A minor piece of information was that [OpenAI Approached Anthropic About Merger](https://archive.is/RCQnv), and The Atlantic has a slightly longer look and some speculation about what’s going on in [Inside the Chaos at OpenAI](https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/). With Ilya’s recent turnaround, there’s apparently also the option of Altman coming back: [Sam Altman is still trying to return as OpenAI CEO](https://www.theverge.com/2023/11/20/23969586/sam-altman-plotting-return-open-ai-microsoft), something even MS would apparently be okay with, at least publicly. Business analysis blog Stratechery posted some analysis in [OpenAI’s Misalignment and Microsoft’s Gain](https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/).

Loving it, this is like SubredditDrama, but without having any actual chance of affecting me (I don’t believe in AGI coming out of LLMs), and on a global scale.

Doesn’t really work when none of this was initiated by MS.


I do not believe any 7B model comes even close to 3.5 in quality. I used LLaMA v1 65B, and it was horrible in comparison. Are you really telling me that this tiny model gives better general answers? Or am I just misunderstanding what you are saying?


Oh, faster is easy. GPT 3.5 is also far faster than GPT 4. Faster at quality replies is the issue.



That’s an interface for models. Which model did you use?


I’d say this is an amazing result for MS. Not only is their investment mostly Azure credits, so OpenAI stays dependent on MS, but now they’ve also got Altman and his followers for themselves for more research.


Microsoft hires former OpenAI CEO Sam Altman and co-founder Greg Brockman
Well, this escalated quickly. So is this the end, or will the mods create an OpenAI megathread? ;)

Nothing that runs on my GPU / CPU comes even close to GPT 3.5; GPT-4 is not even in the same universe, and that’s with the local models running far more slowly.


I don’t mind so much what they did with firing him, but how they did it, and everything since. It just seems extremely unprofessional and disorganized.