  • 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • ricecake@sh.itjust.works to linuxmemes@lemmy.world · Kinda sus... · +114 / -1 · 2 days ago

    While they did create a set of patches that would implement the security features that SELinux provides, what was actually merged was the result of several years of open collaboration and development toward implementing those features.

    There’s general agreement that the idea the NSA proposed is good and an improvement, but there was, and still is, disagreement about the specific implementation approaches.
    To avoid those issues, the approach taken was to create a more generic system (the Linux Security Modules framework) that SELinux would then take advantage of. That’s why SELinux, AppArmor, and others can live side by side without it being a constant maintenance and security nightmare. Each one lives in its own little self-contained, auditable box, and the kernel just makes the “check authorization” function call, which flows into the right module by configuration.
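    A rough sketch of that dispatch pattern in plain C is below. It is not actual kernel code; the struct, hook name, and signatures are simplified stand-ins for the real LSM interfaces.

```c
/* Toy illustration of the LSM idea: security modules register hooks, and the
 * rest of the "kernel" only ever calls one generic authorization function. */
#include <stdio.h>
#include <stddef.h>

struct security_module {
    const char *name;
    int (*file_open)(const char *path);   /* 0 = allow, nonzero = deny */
};

static int selinux_file_open(const char *path) {
    printf("selinux: label check on %s\n", path);
    return 0;
}

static int apparmor_file_open(const char *path) {
    printf("apparmor: profile check on %s\n", path);
    return 0;
}

/* Which modules are active is decided by configuration at boot;
 * here it is just a static list. */
static const struct security_module modules[] = {
    { "selinux",  selinux_file_open  },
    { "apparmor", apparmor_file_open },
};

/* Core code only ever calls this generic entry point; it does not know
 * or care which security modules happen to be loaded. */
static int security_file_open(const char *path) {
    for (size_t i = 0; i < sizeof(modules) / sizeof(modules[0]); i++) {
        if (modules[i].file_open(path) != 0)
            return -1;  /* any module can veto the open */
    }
    return 0;
}

int main(void) {
    if (security_file_open("/etc/shadow") == 0)
        printf("open allowed\n");
    return 0;
}
```

    The real framework has far more hooks than this, but the shape of the call is the same: core code asks one question, and whatever modules are configured answer it.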

    The Linux community was pretty paranoid about the NSA in 2000, so the code definitely got a lot more scrutiny than the typical proposal.

    A much easier way to introduce a backdoor would be to start a tiny company that produces some arbitrary piece of hardware which you then add kernel support for.

    https://github.com/torvalds/linux/tree/master/drivers/input/keyboard - that’s just the keyboard drivers.

    Now you’re adding code to the kernel, and with the right driver and development ability you can plausibly make changes that have non-obvious impacts. As a bonus, if someone notices, you can just say “oops!” and not be “the god-damned NSA” that everyone expects to be up to something, but instead four humble keyboard enthusiasts with an esoteric set of lighting and input opinions, the kind that are a dime a dozen on Kickstarter.



  • I think part of it is that not all propaganda is bad.

    There’s probably a term for it, but I’d draw a distinction between “opinion” propaganda and “aspirational” propaganda.

    One tries to change your opinion of something, like “cops are good, noble, and always do the right thing”.
    The other encourages the viewer to live up to some ideal. It’s entirely possible for that ideal to also not be great, but even then “I should be” is better than “they are”.

    A lot of PSAs and things from the Ad Council fall into the latter category, like the billboards that basically say “real men are present and emotionally available fathers to their children” or “good parents teach their kids healthy diet and exercise by example”.
    They’re openly cases of the government trying to change public opinions or attitudes (which arguably makes them better examples of propaganda than a lot of commercial television), but they don’t feel as objectionable.

    “This honest and kind man who always tries to do good and help those around him to the point that it overshadows him being a physically perfect human is the embodiment of the emblematic American man” is more in that aspirational category.




  • Fair enough. You’d be surprised how many people don’t know you need to clean them occasionally and think it’s normal for stuff to go terribly wrong really quickly. :)

    I got a new washer relatively recently and it’s quiet enough that it’s not really audible from the next room unless you tell it to do a really aggressive spin cycle with a big load.

    In any case, I think the point of the timed wash feature is to make it so your laundry finishes right when you get home rather than overnight.


  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +3 / -1 · 10 days ago

    Yeah, I know how it works, and I also know how different types of AI work.

    It’s a field from the 50s concerned with making systems that perceive their environment and change how they execute their tasks based on those perceptions to maximize the fulfillment of their task.

    Yes, all modern laundry machines utilize AI techniques involving interpolation of sensor readings into a lookup table to pick wash parameters more intelligently.

    You’ve let sci-fi notions of what AI is get you mad at a marketing department for realizing that we’re back to being able to label AI stuff correctly.


  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +3 / -1 · 10 days ago

    I love it when people angrily declare that something AI researchers figured out in the 60s can’t be AI because it involves algorithms.

    Using an algorithm to take a set of continuous input variables and map them to a set of continuous output variables in a way that maximizes result quality is an AI algorithm, even if it’s using a precomputed lookup table.
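    As a made-up illustration of that kind of algorithm, here is a small C sketch that linearly interpolates between rows of a precomputed table, mapping one continuous input (estimated load weight) to continuous outputs (water volume and detergent dose). The table values and units are invented for the example.

```c
/* Hypothetical precomputed-lookup-table controller: a continuous input
 * (estimated load weight in kg) is mapped to continuous outputs (litres of
 * water, millilitres of detergent) by interpolating between table rows. */
#include <stdio.h>
#include <stddef.h>

struct row { double weight_kg, water_l, detergent_ml; };

static const struct row table[] = {
    { 1.0, 20.0, 25.0 },
    { 3.0, 35.0, 45.0 },
    { 5.0, 50.0, 60.0 },
    { 8.0, 70.0, 80.0 },
};
#define ROWS (sizeof(table) / sizeof(table[0]))

static double lerp(double a, double b, double t) { return a + (b - a) * t; }

/* Interpolate outputs for any weight, clamping outside the table range. */
static struct row lookup(double weight_kg) {
    if (weight_kg <= table[0].weight_kg) return table[0];
    if (weight_kg >= table[ROWS - 1].weight_kg) return table[ROWS - 1];
    for (size_t i = 0; i + 1 < ROWS; i++) {
        if (weight_kg <= table[i + 1].weight_kg) {
            double t = (weight_kg - table[i].weight_kg) /
                       (table[i + 1].weight_kg - table[i].weight_kg);
            struct row out = { weight_kg,
                               lerp(table[i].water_l, table[i + 1].water_l, t),
                               lerp(table[i].detergent_ml, table[i + 1].detergent_ml, t) };
            return out;
        }
    }
    return table[ROWS - 1]; /* unreachable */
}

int main(void) {
    struct row r = lookup(4.2);
    printf("4.2 kg load -> %.1f L water, %.1f mL detergent\n", r.water_l, r.detergent_ml);
    return 0;
}
```

    A 4.2 kg load falls between the 3 kg and 5 kg rows, so both outputs come back as weighted blends of those rows rather than one fixed setting.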

    AI has been a field since the 1950s. Not every technique for measuring the environment and acting on it needs to be some advanced deep learning model for it to be a product of AI research.


  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +2 / -1 · 10 days ago

    A notification that your load is done is actually convenient. It’s typically also paired with some sensors that can let you know if you need more detergent or to run a cleaning cycle on the washer.
    Mine also lets you set the wash parameters via the app if you want, which is helpful for people who benefit from the accessibility features of the phone. It’s difficult to adjust the font size or contrast on a washing machine, or hear its chime if you have hearing problems.




  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +3 / -1 · 10 days ago

    You can’t see a benefit to a washing machine that can wash clothes without you needing to figure out how much soap to add or how many rinse cycles it needs?

    I genuinely pity anyone so influenced by marketing that they can’t look at what a feature actually does before deciding they hate it.


  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +3 / -1 · 10 days ago

    Well that’s sort of my point. It’s an algorithm, or set of techniques for making one, that’s been around since the 50s. Being around for a long time doesn’t make it not part of the field of AI.

    The field of AI has a long history of the fruits of its research being called “not AI” as soon as they find practical applications.

    The system takes measurements of its problem domain and then alters its behavior to produce a better result given those measurements. That’s what intelligence is. It’s far from the cleverest intelligence, and it doesn’t engage in reasoning or have the ability to learn.

    In the last iteration of the AI marketing cycle, companies explicitly stopped calling things AI even when they were, much like how in the next 5-10 years or so we won’t label anything from this generation “AI”, even when something is explicitly using the techniques in a way that makes sense.


  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +7 / -1 · 10 days ago

    Wouldn’t you know it: AI has also been algorithmically based and around since the 1950s.

    AI as a field isn’t just neural networks and GPUs invented in the last decade. It includes a lot of stuff we now consider pretty commonplace.
    Using a few simple sensors to measure continuous values and decide on soap quantity, how much water to dispense, and the number of rinse cycles is pretty much a textbook example of classical AI: perceiving the environment and changing actions to maximize the quality of the task outcome.

    https://en.wikipedia.org/wiki/AI_effect


  • ricecake@sh.itjust.works to memes@lemmy.world · AI needs to stop · +24 / -5 · 10 days ago

    The reassuring thing is that AI actually makes sense in a washing machine. Generative AI doesn’t, but that’s not what they use. AI includes learning models of different sorts. Rolling the drum a few times to get a feel for weight, and using a light sensor to check water clarity after the first time water is added lets it go “that’s a decent amount of not super dirty clothes, so I need to add more water, a little less soap, and a longer spin cycle”.
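    For a rough idea of what that decision logic might look like, here is a made-up sketch; the sensor calls, thresholds, and numbers are invented for illustration and don’t come from any real machine.

```c
/* Invented sketch of the perceive-then-adjust loop described above: spin the
 * drum to estimate load weight, read a turbidity (water clarity) sensor after
 * the first fill, then nudge water, detergent, spin time, and rinse count. */
#include <stdio.h>

/* Stand-ins for real sensor reads. */
static double estimate_load_kg(void) { return 4.0; }  /* from drum torque    */
static double read_turbidity(void)   { return 0.3; }  /* 0 = clear, 1 = murky */

int main(void) {
    double load = estimate_load_kg();
    double dirt = read_turbidity();

    /* Start from a baseline program and adjust it from the measurements. */
    double water_l      = 10.0 + 6.0 * load;        /* more clothes, more water  */
    double detergent_ml = 20.0 + 40.0 * dirt;       /* clearer water, less soap  */
    int    spin_minutes = 8 + (load > 5.0 ? 4 : 2); /* heavier load, longer spin */
    int    rinses       = dirt > 0.6 ? 3 : 2;       /* dirtier load, extra rinse */

    printf("load %.1f kg, turbidity %.2f -> %.0f L water, %.0f mL detergent, "
           "%d min spin, %d rinses\n",
           load, dirt, water_l, detergent_ml, spin_minutes, rinses);
    return 0;
}
```

    The point isn’t the particular constants; it’s the loop: measure, then adjust the wash program based on what was measured.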

    They’re definitely jumping on the marketing train, but problems like that do fall under AI.


  • Eh, anything interesting is going to be inside and out of sight. The desert is so big that people aren’t going to be sneaking up on it without you noticing.

    We’re not going to rely on obscurity to keep our research sites secure. People who have worked at similar secure sites report parking at the meeting building, changing into their work coveralls, going through a security screening, and then being driven for an hour or two in a bus with blacked-out windows to work in a sealed building with no windows, before being driven back under similar conditions.

    Using your existing classified development facility has the advantage that you can keep activity there at a roughly constant level, so anyone watching from a satellite can’t tell whether there’s more or less activity that would indicate something interesting. Just make sure a dozen buses show up every day, regardless of how many people are in them.

    It’s similar to how you can tell the Pentagon’s level of alert by looking at pizza delivery wait times at off hours on Google Maps.