I’m gay

  • 73 Posts
  • 283 Comments
Joined 1Y ago
Cake day: Jan 28, 2022


Fantastic, thank you for sharing this. None of this surprises me as I keep up to date on AI and ethical concerns, but I’m glad it’s receiving more attention.




Thank you for sharing this. You’re absolutely right that it’s not up to you to educate others. In fact, the concept of educational burden often comes up when we talk about minorities. If someone unknowingly does something racist or sexist, they often push back and demand an explanation from the affected party. That is a burden they are placing on others because they have not educated themselves, and it is misplaced: they are the ones causing harm, and they are usually the ones in a position of power or privilege.


People mixing their own pre-workout often make this mistake and drop in a tablespoon or more of pure caffeine, which can be, and often is, lethal. Caffeine is a risky substance in purified form, a danger shared by many drugs whose active dose is very small.
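To put rough numbers on why measuring by volume is so dangerous here (ballpark figures based on commonly cited FDA estimates, not dosing advice):

```python
# Ballpark arithmetic only, not dosing advice.
# The FDA estimates a teaspoon of pure caffeine powder at roughly 3.2 g.
TEASPOON_MG = 3200
TABLESPOON_MG = 3 * TEASPOON_MG      # ~9,600 mg in one tablespoon

CUP_OF_COFFEE_MG = 100               # a typical cup is ~100-200 mg
LETHAL_ESTIMATE_MG = 10_000          # commonly cited lethal range starts around 10 g

print(f"1 tbsp ≈ {TABLESPOON_MG} mg ≈ {TABLESPOON_MG // CUP_OF_COFFEE_MG} cups of coffee")
print(f"That's {TABLESPOON_MG / LETHAL_ESTIMATE_MG:.0%} of a commonly cited lethal dose")
```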


I think a focus on the source of the misinformation is misplaced.

It’s the power of that source to generate misinformation at a faster speed and for close to no cost that’s the more pressing issue here.

I don’t think this particular scenario is likely, but imagine I use an LLM to create legal documents to spin up non-profit companies for very little cost, and I hire a single lawyer to simply file these documents without reading them, reviewing only if they get rejected. I could create an entire network of fake reporting companies fairly easily. I could then have an LLM write a pile of fake news, post it to the websites of these fake reporting companies, and embed an additional layer of reporting on top of the reporting to make it seem legitimate. Perhaps some of the “reports” are actually Twitter bots, Instagram bots, etc. generating images with false information, with bot farms paid to surface these posts until they catch on and spread naturally on outrage or political content alone.

Reporting like this might seem above-board enough to make it onto some real reporting websites, which in turn could cause it to show up in major media. It could end with real people creating Wikipedia pages, or updating existing information on the internet, sourced entirely from these manufactured stories. There are outlets that do their research, and there are places that fact-check or might question these sources, but imagine I’m able to absolutely flood the internet with this. At what share of all reporting/sharing/news/tweeting/youtubing/tiktoking does this exceed what our fact-checking systems can actually investigate?

I also think it’s important to consider the human element. Imagine I am an actor interested in spreading misinformation and I have access to an LLM. I can outsource the bulk of my writing to it: I simply tell it to write a piece about whatever I wish to spread, then review the output as a human, make minor tweaks to the phrasing, combine multiple responses, or otherwise use it as a fast synthesis engine. I now have far more time to spread that misinformation online, meaning I can reach more venues and produce it much faster than I could previously. In fact, I’m positive this vector is already being used by many.

However, none of that touches on what I think is the most pressing issue of all: the use of AI outside its scope, combined with a fundamental misunderstanding of the biases inherent in systemic structures. I’ve seen cases where AI was used to determine when people should or shouldn’t receive government assistance. I’ve seen AI used to flag who should be audited. I’ve seen AI used by police to predict who is likely to commit a crime. Language models aren’t yet regularly used at policy scale, but they too have deeply problematic biases. We need to rethink when AI is appropriate, recognize its limitations, and consider the ethical implications during the very design of the model itself, or we’re going to see far-reaching consequences that simply amplify existing systemic biases by reinforcing them in their application. Imagine we trained a model on IRS audits and used it to decide who deserves an audit: we’d end up with an even more racist system than we currently have. We need to stop the over-application of AI, because we so often fundamentally misunderstand its scope, its reach, and the very systems we train it on.


Why do you think I perceive ChatGPT in this way? I voiced an opinion about the biases that ChatGPT and most AI systems have due to their large training sets, which reflect systemic biases.


Why do you ask this question?


Can you help me understand what you mean by propaganda device?



Unfortunately, AI’s typical problem with biases, in particular against minorities who are already discriminated against online, apparently did not warrant holding back this release. It gets only a tiny mention under limitations:

GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.




I don’t think the criticism is meant for every homeowner; it’s specific to treating housing purely as a means to make money, which is not what you’re doing here.


This will never end up happening, because big business has its hands in every government, but tracking of any sort really needs to be opt-in rather than opt-out. In California, for example, this is how it works for companies that send out those “we want to share information with our business partners” emails and documents: if you are a California resident and do not reply, by law the company must assume you opted out.










New version bugs - Language undetermined error, Subscribed/Local/All not defaulting
If you haven't set a language in your profile and you try to post, the default option is "undetermined", and anything you try to post or reply to fails with the unhelpful error "language_not_allowed". To an end user this provides no guidance on what happened or how to fix it.

Similarly, if you haven't saved a new default since updating, the main page of an instance should show your previously saved choice among Subscribed/Local/All, but it always shows All (since that is what your profile defaults to).
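A fix could be as simple as mapping opaque API error codes to actionable guidance on the client. A hypothetical sketch (the message wording and settings path are illustrative, not the app’s actual UI):

```python
# Hypothetical sketch: translate opaque API error codes into guidance.
# The error code mirrors the one described above; the message text is made up.
FRIENDLY_ERRORS = {
    "language_not_allowed": (
        "Your profile has no post language set, so it defaults to 'undetermined'. "
        "Pick a language in your profile settings and try posting again."
    ),
}

def explain_error(code: str) -> str:
    """Return a user-facing message for an API error code, with a safe fallback."""
    return FRIENDLY_ERRORS.get(code, f"Unexpected error: {code}")

print(explain_error("language_not_allowed"))
```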

The paradox of tolerance is a good portion of the reasoning for this instance to exist, but we aren’t just talking about the gradual shift toward intolerance here. The framing here is that nice people don’t stick around places that aren’t nice. Tolerance comes into play, but even tolerant-yet-judgemental behavior may call for conversations or action, depending on how events play out.

Ultimately we strive to be higher-touch than most moderation, because people feel better when they’re included in the conversation, and we want to promote a nice environment, which can be tricky when people disagree.


I’m having a lot of trouble trying to find the CAT-SEB (Cognitive Analytic Therapy-Swedish Enlistment Battery) which appears to be the IQ test of choice used here. Does anyone happen to have a link to the questions or a sample or some kind of idea of what they’re actually testing?


JKR uses a lot of her time and money to further the TERF agenda; she even proudly considers herself a TERF. The new Harry Potter game is going to generate a decent amount of revenue for her, which means it’s directly funding a hateful ideology.

Some queer people and allies have decided to fight this however they could, which has meant hurling insults at people who talked about the game, posting memes that spoiled its main plot, and really anything else within their control. It’s been a bit of a nightmare for moderation if you didn’t take a side in the matter.


Okay, so I need to be sure I have something that can make sense of S3 calls to storage. I feel like we’re getting closer; I’m just still way out of my own technological depth.


Is there any way to do this while avoiding Amazon S3? I don’t want a surprise bill from Amazon because we exceeded some threshold on the free tier (nor do I want to have to create new free-tier accounts every 12 months).
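For what it’s worth, “S3” in most of this tooling means the S3 protocol, not necessarily Amazon: self-hosted, S3-compatible stores like MinIO speak the same API, so you can avoid AWS entirely. A minimal sketch using boto3 (endpoint, bucket name, and credentials are placeholders):

```python
# Sketch: speak the S3 protocol to a self-hosted MinIO server instead of AWS.
# Endpoint, bucket name, and credentials are placeholders for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # your own MinIO box, not amazonaws.com
    aws_access_key_id="minioadmin",        # MinIO's default dev credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="lemmy-images")
s3.upload_file("meme.png", "lemmy-images", "meme.png")
print(s3.list_objects_v2(Bucket="lemmy-images")["KeyCount"])  # -> 1
```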


Great article, thanks! Completely unsurprising, but I’m glad that issues like this are being surfaced through mediums in which they will receive attention, because these companies certainly aren’t proactively trying to identify and fix these kinds of issues.



I am willing to contribute storage (I have several TB), but I am somewhat bandwidth limited, so I need to be a bit careful with hosting too many images to not impact the other services that I run on the same connection.

How would you accomplish this? I have plenty of bandwidth and plenty of storage I can partition off as a possible solution (hell, even buying a Raspberry Pi and an old hard drive wouldn’t be all that expensive, and it could be a fun project), but I don’t even have an idea of how to connect this to the Lemmy instance.


If it’s only used for images, I’m not all that concerned… images failing to load when the rest of the page loads really only matters when the focus of the post is a meme, and I’m not too worried about those.


Thank you for adding the additional context, hopefully it can help people calibrate how much they should believe this writing.


Honestly it’s kinda fascinating in some extremely weird way…


Of note, there are no sources for this. To an extent that’s to be expected, and Hersh does have a history of breaking a few important stories, but his previous stories were backed by far more paperwork than this one.



What is up with the way that article is written? Is it meant to target incels? There’s a weird level of hand-holding tutorial interspersed with sexist ideology about “owning” a girlfriend. There’s also a weird shift from NFT art of women to trying to find a date in VR, with no mention that you’re trying to interact with a human. If this was written by a human (and not an AI), I am very concerned.


I find it rather interesting that some of the places most keen on adopting AI are also some of the places most plagued by racism. Experts in the field agree nearly unanimously that almost all AI systems encode racial bias, so deploying them into a target system that’s already deeply racist is just not a good idea.

Unfortunately, at the end of the day, capitalism is likely to win. This will likely be sold to police departments in the coming months and years, despite this article and any attention it receives.


Never heard the term ‘feudal security’ before. Interesting read, thanks!





This is exactly the kind of AI application that is almost assured to appear in financially strained systems, especially chronically underfunded government systems, and it is the kind most at risk of causing serious harm, because nearly all such algorithms are biased and, in particular, racist.

This is the use of AI that scares me the most, and I’m glad it’s facing scrutiny. I just hope we put in extremely strong protections ASAP. Sadly, most people in politics do not see how dangerous using AI for these applications can be, so we most likely will see a lot more of this before we see any regulation.

If you’re curious why these kinds of applications are nearly always biased, the following quote from the article helps to explain:

The Allegheny Family Screening Tool was specifically designed to predict the risk that a child will be placed in foster care in the two years after the family is investigated.

They are comparing variables to an outcome, and the outcome is itself shaped by existing social structures and biases. It’s like correlating the risk of ending up in jail with factors that loosely correlate with race: what you’ll find is that the strongest indicators of race, in particular of being Black, are also the strongest indicators of ending up in jail, because our system has these biases and jails Black individuals at a much higher rate than individuals of other races.

The same is happening here. The chance of a child being placed in foster care depends heavily on the parents’ race. We are not assessing how well the child is treated or whether they might need support; we are assessing the risk that the child will be moved into foster care, which can alternatively be read as assessing the likelihood that the child is non-white. This distinction is critical to understanding how AI reinforces existing biases.
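If it helps to see the mechanism concretely, here is a synthetic illustration (entirely made-up data, for intuition only): the model never sees race, but a correlated proxy feature lets it reconstruct the bias baked into the outcome labels.

```python
# Synthetic illustration of proxy bias. The model is trained without the
# protected attribute, yet learns it anyway via a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)             # hidden protected attribute
proxy = race + rng.normal(0, 0.5, n)     # e.g. zip code: correlates with race
need = rng.normal(0, 1, n)               # the thing we'd actually want to measure

# Historical outcomes are driven partly by need but heavily by race (the bias).
outcome = (0.5 * need + 2.0 * race + rng.normal(0, 1, n)) > 1.5

# Train only on "neutral" features; race itself is excluded.
model = LogisticRegression().fit(np.column_stack([proxy, need]), outcome)

print("coefficients [proxy, need]:", model.coef_[0])
# The proxy coefficient dominates: the model has learned race by proxy,
# so its "risk" predictions reproduce the bias in the historical outcomes.
```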


For one, there are a lot of real Nazis on the internet. I thought most of them were fake trolls just shitposting for the lulz.

I can see how someone might fall into this trap, but scholars of speech and social phenomena have been warning about this for quite some time. When an entire scientific field starts to emerge around events happening in real time, it can help to pay attention to what the experts think, even if you disagree, as they can often provide context or insight.

Also, you say this after plugging a website with literally “nazi” in the URL? Odd.

When I think of free speech, I think that doctors and lawyers should be able to give their professional opinions without fear of censorship.

What an interesting take; I’ve never heard anyone complain about this. I happen to work in medicine rather than law, so I can’t comment on the latter, but I’ve never heard a clinician of any sort (not just doctors but PAs, NPs, etc.) voice this fear. In fact, medical schools have spent far more effort over the last 10-20 years on teaching how to talk with patients, because a lack of self-censorship has fractured clinician-patient relationships, especially among minorities.

Things get much more complicated though. There appear to be government actors and company actors who try to prevent sites from growing by posting purposefully provocative content.

While I’m sure this is true at a certain scale, I doubt anywhere on Lemmy is big enough for it to truly be happening. I would love to see examples of it on this platform, though, to better understand how to defend against this kind of behavior.





Psychedelics and dance parties have gone hand in hand for a long time, but I think the burner community also has loose ties to (or rather, grew out of a fascination with) the hippie community, certain alternative-medicine and spiritual communities, and other groups that all happen to hold rather nontraditional views on psychedelics and drugs as medicine, views the burners have co-opted.

To be fair, they do a lot of cocaine as well; it’s not the only drug they partake in. But they often think of psychedelics less as a party drug and more as a mind-expanding experience. In that respect I agree: psychedelics are particularly interesting and useful in a lot of ways, and that’s what we’re finding in medicine now that we’re able to study them once again.




cute writing prompt

From my experience with Bay Area tech burners, many of them like to do psychedelics.







Horseshoe theory was never meant as a rigorous description of political attitudes. It makes the classic mistake of conflating economic policy with social policy in an attempt to oversimplify and classify individuals. Perhaps most importantly, there’s exceedingly little scientific study of horseshoe theory, and what little exists fails to support the hypothesis.



At what level do we have control over the images uploaded to beehaw? I know we’ve broached this topic a few times with @dessalines, but I don’t remember the outcome. Can we cap upload size in kB or pixel dimensions as a stopgap?

Is there anyone on our instance who can help push for more granular control over what eats up disk space, or who can develop scripts to help us manage it? This instance (and likely others) would find scripts that remove old content with no comments, starting with the largest objects, particularly valuable. Something to proactively identify whatever is taking up the most space on the server would help too.
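For the “what’s eating the disk” half, a minimal sketch, assuming images live on the local filesystem (the path is a placeholder; the actual pict-rs layout may differ):

```python
# Sketch: report the largest files under a media directory so a moderator
# can decide what to prune. MEDIA_ROOT is a placeholder; adjust to your setup.
import os
from pathlib import Path

MEDIA_ROOT = Path("/var/lib/pictrs/files")   # hypothetical location
TOP_N = 50

files = []
for dirpath, _dirnames, filenames in os.walk(MEDIA_ROOT):
    for name in filenames:
        path = Path(dirpath) / name
        try:
            files.append((path.stat().st_size, path))
        except OSError:
            continue  # file vanished or unreadable; skip it

for size, path in sorted(files, key=lambda f: f[0], reverse=True)[:TOP_N]:
    print(f"{size / 1_048_576:8.1f} MiB  {path}")
```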




While I’m not going to tell someone how they should enjoy the internet, there are very real storage costs to hosting images or even creating thumbnails of them. Are those the only pictures you disapprove of? What about vids?


Thank you for the sentiment. Could you explain more what you mean by “mute noise - especially with pictures and vids”?


A few issues I’ve seen with adoption in the federated/open-source world:

There is a technical barrier to entry. The idea that you’re on a website connected to other, different websites through the same interface is not one people are familiar with. For a social site, questions about moderation and about who you’re interacting with are hard to answer if you’re unwilling or unable to learn the terminology needed to understand how it all works.

Each entry point into the system is also slightly different: how it presents itself, its design, who participates there, what kinds of discussions exist. Your first introduction to Lemmy might be an instance that doesn’t appeal to you, and you may not realize it isn’t representative of everything available on Lemmy; discovering the rest can be difficult. The same is true of other federated platforms.

As you mentioned, there are also issues with the algorithmic feed. That is what keeps a lot of people on a particular platform: they want content to come to them rather than having to search for it, and they aren’t always aware of what content they want. Federated content is much more pull-oriented than push-oriented, and until someone codes an algorithm to push content, I think there will be a lot of resistance from the subset of users who want content pushed to them rather than pulled.
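The “push” part isn’t magic, either; even a crude hot score like the sketch below (a Reddit/Lemmy-style time decay, with made-up post data) captures most of what people expect from a pushed feed.

```python
# Crude "hot" ranking sketch: newer, higher-voted posts float to the top,
# so content is pushed to the user without searching. Post data is made up.
from datetime import datetime, timedelta, timezone

def hot_score(upvotes: int, published: datetime, gravity: float = 1.8) -> float:
    """Score decays with age; heavily upvoted posts stay visible longer."""
    age_hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return upvotes / (age_hours + 2) ** gravity

now = datetime.now(timezone.utc)
posts = [
    ("fresh post, few votes", 5, now - timedelta(hours=1)),
    ("day-old, popular", 500, now - timedelta(hours=24)),
    ("week-old, once viral", 5000, now - timedelta(days=7)),
]

for title, votes, published in sorted(posts, key=lambda p: -hot_score(p[1], p[2])):
    print(f"{hot_score(votes, published):7.2f}  {title}")
```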









Re-title a post?
Is there a way to change the title of a post someone else created in a community you moderate? If not, can we please add this functionality?