Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • This post will probably get taken down; it doesn’t belong in AskLemmy. You might want !selfhosted@lemmy.world or one of the programming communities like !webdev@programming.dev.

    That said, it’s fairly easy to rent a cheap VPS for like $5 to get started, get NGINX, MariaDB and PHP running on it, and then install WordPress or Drupal.

    I personally would wait for the WordPress drama to settle before committing to that platform.

    The problem with hosting services dedicated to, say, WordPress, is the lack of control. If you need other apps to run you have to pay for another service, whereas on your own VPS/server you can do whatever you want. Need ElasticSearch for something else? Sure, no problem, as long as the server is big enough.


  • To kind of see it visually, I found this thread where someone took oscilloscope captures of the output of their UPS, and they’re all pseudo-sines: https://forums.anandtech.com/threads/so-i-bought-an-oscilloscope.2413789/

    As you can see, the power isn’t very smooth at all. It’s good enough for a lot of use cases and lower end power supplies, because they just shove that into a bridge rectifier and capacitors. Higher end power supplies have tighter margins, and are also more likely to have safety features to protect the PC, so they can go into protection mode and shut off. Bad power can mean dips in power to the system, which can cause calculation errors, and that’s very undesirable, especially on a server. It probably also messes with power factor correction circuits, something cheap PSUs often skimp on but a good high quality one would have, and it may shut down because of it.

    As you can see in those images too, it spends a significant amount of time at 0V (no power, that’s the middle of the screen), whereas a sine wave spends an infinitely short time at 0: it goes positive and then negative immediately. For all that time spent at 0, you rely on big capacitors in the PSU holding enough charge to make it to the next burst of power. With a sine wave they only need to hold on for a moment (we’re going down to 12V and 5V from a 120/240V input, so the amount of time normally spent at or below ±12V is actually fairly short).

    It’s technically the same average power, so most devices don’t really care (rough numbers in the sketch at the end of this comment). It really depends on the design of the particular unit: some can deal with really bad power input and manage just fine, and some will get damaged over long-term use. Old linear ones with an AC transformer on the input in particular can be unhappy because of magnetic field saturation and other crazy inductor shenanigans.

    Pure sine UPSes are better because they’re basically the same as what comes out of the wall outlet. Line interactive ones are even better because they’re ready to take over the moment power goes out and exactly at the same spot in the sine wave so the jitter isn’t quite as bad during the transition. Double conversion is the top tier because they always run off the battery, so there’s no interruption for the connected computer at all. Losing power just means the battery isn’t being charged/kept topped off from the wall anymore so it starts discharging.
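
    Quick numeric sketch of the “same average power” point, with made-up but plausible numbers (60Hz, 170V peak, a crude stepped approximation; real UPS outputs vary): the stepped wave matches the 120V sine’s RMS while still sitting at 0V for about half the cycle.

    ```python
    import numpy as np

    f = 60.0                                   # mains frequency, Hz (assumed)
    t = np.linspace(0, 1 / f, 10_000, endpoint=False)

    # True sine at 120 V RMS -> about 170 V peak
    sine = 170 * np.sin(2 * np.pi * f * t)

    # Crude "pseudo-sine": flat +/-170 V pulses with dead time at 0 V in between.
    # Duty cycle chosen so the RMS matches the sine.
    duty = (120 / 170) ** 2                    # ~half the cycle spent at +/- peak
    phase = (t * f) % 1.0
    stepped = np.where(np.abs(phase - 0.25) < duty / 4, 170.0,
              np.where(np.abs(phase - 0.75) < duty / 4, -170.0, 0.0))

    def rms(v: np.ndarray) -> float:
        return float(np.sqrt(np.mean(v ** 2)))

    for name, v in (("sine", sine), ("stepped", stepped)):
        print(f"{name:8s} RMS: {rms(v):5.1f} V, time at 0 V: {np.mean(v == 0) * 100:4.1f}%")
    ```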





  • I would probably skip Lemmy Easy Deploy and do a regular deployment so it doesn’t mess with your existing setup. Getting it running with plain Docker is not that much harder; you just need to point your NGINX at it. Easy Deploy kind of assumes it’s got the whole machine to itself, so it’ll try to bind the same ports as your existing NGINX, and so does the official Ansible playbook.

    You really just need a postgres instance, the backend, pictrs, the frontend and some NGINX glue to make it work. I recommend stealing the files from the official Ansible playbook, as there are a few gotchas in the NGINX config: the frontend and backend share the same hostname and one is just layered on top of the other.



  • One thing that is slightly harder to do on the fediverse is that a bot only sees the communities at least one of its instance’s users is subscribed to, so it’s not trivial to make one that sees everything out of the box. It’s easy enough to fix with a few API calls to recursively discover most instances, though (rough sketch at the end of this comment).

    And there are bots: there’s the Media Bias Fact Check bot, there’s the PipedVideo bot that posts piped/invidious links for any YouTube link. I have both of those blocked because I don’t care (and Tesseract has both built-in anyway).

    I think culturally stuff like remind me is too noisy/unnecessarily spammy, so if I were to implement such a thing I’d build it directly into the UI so you don’t have to (ab)use comments for it. The haiku bot I blocked a long time ago because it’s just kind of noisy; it’s fun for a while but eventually it gets annoying. As an admin I also think about resource usage: it’s not just wasting big VC-funded companies’ money, it’s wasting money for people like me too.

    The fediverse opens up a lot of possibilities that allow things to be done cleanly, without bots and spam. As a user you’re free to use any UI/frontend you want and still access all the available features, unlike on Reddit even well before they killed off third party apps completely.
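
    Something like this is what I mean by “a few API calls”; it’s only a sketch from memory of the Lemmy HTTP API (the /api/v3/federated_instances and /api/v3/community/list endpoints), so double-check the paths and field names against the current docs:

    ```python
    import requests

    def linked_instances(host: str) -> set[str]:
        """Instances this host federates with."""
        r = requests.get(f"https://{host}/api/v3/federated_instances", timeout=10)
        r.raise_for_status()
        linked = r.json()["federated_instances"]["linked"]
        return {i["domain"] for i in linked}

    def local_communities(host: str, limit: int = 50) -> list[str]:
        """Communities actually hosted on this instance (first page only)."""
        r = requests.get(f"https://{host}/api/v3/community/list",
                         params={"type_": "Local", "limit": limit}, timeout=10)
        r.raise_for_status()
        return [c["community"]["actor_id"] for c in r.json()["communities"]]

    seen: set[str] = set()
    frontier = {"lemmy.world"}                 # arbitrary starting instance
    while frontier and len(seen) < 25:         # small cap, just for the example
        host = frontier.pop()
        seen.add(host)
        try:
            frontier |= linked_instances(host) - seen
            print(host, local_communities(host)[:5])
        except (requests.RequestException, KeyError):
            pass                               # dead, blocking, or incompatible instance
    ```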


  • If you’re going the emulation route and willing to use a different kind of gun, you just need to feed the XY coordinates to the emulator. Duck Hunt with a Wiimote is pretty trivial to implement. Once the emulator knows the XY coordinates it can feed the correct inputs to the game no matter the latency of the gun: if the emulated screen has a white pixel at that coordinate, then the controller port reads a one, which circumvents the entire problem (see the sketch at the end of this comment). It can probably be done on original hardware too, with an RP2040 intercepting the video signal and injecting the correct input, as long as it knows the coordinates before sending the fire input to the console.

    To make it work with the original gun, the emulator could rewind and replay with the correct inputs fast enough to be completely invisible to the user; we already do that for netplay over the Internet. Play once, process the gun input, rewind and replay with the correct inputs (in the background), then continue from the exact frame we rewound to, and it’s completely invisible except for a tiny bit of rubberbanding.

    Original console and original gun though, that’s tricky, but if we could frame-quadruple the thing to 240Hz and use an OLED with zero input lag, we could theoretically have it displayed in time for the console to be able to read it with the light gun by vblank. The tolerances of that would be insane.
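
    The emulator-side check is conceptually just this (a sketch, not any particular emulator’s API; the function name and threshold are made up):

    ```python
    import numpy as np

    LIGHT_THRESHOLD = 200   # assumed brightness cutoff for "the sensor sees light"

    def light_sense(frame: np.ndarray, gun_x: int, gun_y: int) -> bool:
        """frame is the (H, W) grayscale buffer currently being scanned out.
        Returns what the light-sensor bit on the controller port should read."""
        h, w = frame.shape
        if not (0 <= gun_x < w and 0 <= gun_y < h):
            return False                       # pointed off-screen: never sees light
        return bool(frame[gun_y, gun_x] >= LIGHT_THRESHOLD)

    # Each emulated frame, feed this bit to the game's controller port; the real
    # gun/display latency never enters the game's timing loop at all.
    ```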


  • The light gun doesn’t need a CRT per se, but rather the lack of latency. Games usually flash the targets one per frame, and the console then knows from the light level whether you were pointing at a target and which one: if it sees light on the third frame, then the target must be the third one it flashed. That requires the console to be able to read the light level basically immediately after it’s done scanning out the image but before the phosphor fades out, so the timing is very tight.

    If we made OLEDs with direct scanout/zero latency, the guns would work just fine. But because of scalers and filters there’s usually at least one frame of latency, which means at best you’re one target off, or the game thinks you’re cheating and registers a miss (games usually show a full frame of black first to check whether the gun is pointed at a random light source; with a frame of latency the screen is still showing the previous, bright frame, so it registers a cheat/miss). The sketch at the end of this comment shows the off-by-one.

    Add just a frame of latency to a CRT and it’ll stop working there too. Later progressive scan CRTs that buffer two frames to deinterlace the signal also don’t work with the light guns.

    Here it is in action: https://youtu.be/V6XnSvB34y8
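
    Here’s a toy version of the sequential-flash logic (not any real game’s code, just the idea) to make the off-by-one concrete:

    ```python
    def identify_target(saw_light: list[bool]) -> int | None:
        """After the all-black cheat-check frame, targets are flashed one per frame;
        the first frame where the sensor sees light tells the game which one was hit."""
        for frame_index, lit in enumerate(saw_light):
            if lit:
                return frame_index
        return None                            # never saw light: a miss

    # Gun aimed at target 2 on a zero-latency display: light arrives on frame 2.
    print(identify_target([False, False, True, False]))    # -> 2

    # Same aim with one frame of display latency: the light shows up a frame late,
    # so the game credits the wrong target (or the leftover bright frame during the
    # black cheat check gets the shot flagged as a miss instead).
    print(identify_target([False, False, False, True]))    # -> 3
    ```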




  • Maybe it can be hacked together with Syncthing: have your phone’s camera folder sync to an inbox folder on the desktop, have the desktop pick up the files and transcode them with HandBrake, then move the original out of the inbox. That causes Syncthing to sync the deletion back to your phone, and the transcoded version syncs back in its place (rough sketch at the end of this comment).

    I’d also check whether you can just change the bitrate in your camera app’s settings, in case there’s a way to lower the quality there. Could be noticeable, could be just as good as HandBrake; you never know with hardware encoding.
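
    Rough sketch of the desktop side of that, assuming Syncthing drops the camera files in an inbox folder and syncs deletions back; the paths, preset name and polling interval are placeholders, and HandBrakeCLI needs to be on your PATH:

    ```python
    import subprocess
    import time
    from pathlib import Path

    INBOX = Path.home() / "Sync" / "camera-inbox"        # synced from the phone
    OUTBOX = Path.home() / "Sync" / "camera-transcoded"  # synced back to the phone
    ORIGINALS = Path.home() / "camera-originals"         # kept outside Syncthing

    OUTBOX.mkdir(parents=True, exist_ok=True)
    ORIGINALS.mkdir(parents=True, exist_ok=True)

    while True:
        for src in INBOX.glob("*.mp4"):
            dst = OUTBOX / src.name
            subprocess.run(
                ["HandBrakeCLI", "-i", str(src), "-o", str(dst),
                 "--preset", "Fast 1080p30"],            # pick whatever preset you like
                check=True,
            )
            # Moving the original out of the inbox makes Syncthing propagate the
            # deletion back to the phone, while the smaller transcoded copy syncs over.
            src.rename(ORIGINALS / src.name)
        time.sleep(60)
    ```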