• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: July 30th, 2023

  • You very much gloss over the whole “distribution” part. That is one of the three main segments of an electric grid (generation, transmission, distribution). Practical Engineering has some great content about how the grid works, and it addresses some of the problems renewables face, iirc. I recommend giving it a watch or at least a background listen. His first video is a good place to start, and the “which power plant does my electricity come from” video with the lake analogy is also a good intro.

    https://youtube.com/playlist?list=PLTZM4MrZKfW-ftqKGSbO-DwDiOGqNmq53

    Having a DER system is great and all because the transmission system doesn’t have to be as highly loaded (thus increasing the total load a system can withstand), but you still need to be pretty connected for something like this to work - and like others have pointed out, that’s going to mean building a parallel grid (which the energy regulators won’t like if you get too big) or hooking into the existing grid (which probably already has DER management baked into the system if you contact your local power company).

    The grid works because it’s big. That’s a feature, not a bug. And because we have AC, not DC, on the wire, any energized and connected generator has to be in dead lockstep with the grid frequency, or else your hardware is going to become a load, make expensive noises, emit magic smoke, or some combination thereof.
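
    To make that lockstep requirement concrete, here’s a minimal sketch using the textbook power-angle equation, P = (V1·V2/X)·sin(δ). The per-unit values and the 0.1 Hz frequency error are made up for illustration:

    ```python
    import math

    # Textbook power-angle equation for a generator tied to a grid bus:
    #   P = (V_gen * V_grid / X) * sin(delta)
    # where delta is the phase angle between the two voltages.
    # All values below are illustrative, not from any real machine.
    V_GEN, V_GRID = 1.0, 1.0   # per-unit voltages
    X = 0.1                    # per-unit reactance of the tie

    def power_transfer(delta_deg: float) -> float:
        """Per-unit real power flowing from generator to grid."""
        return (V_GEN * V_GRID / X) * math.sin(math.radians(delta_deg))

    # A generator running just 0.1 Hz fast on a 60 Hz grid slews its phase
    # angle by 0.1 * 360 = 36 degrees every second, so power flow swings hard:
    for t in range(7):
        delta = 36 * t
        print(f"t={t}s  delta={delta:3d}deg  P={power_transfer(delta):+6.2f} pu")
    # Past 90 degrees the machine falls out of step; once delta passes 180,
    # P goes negative and the generator absorbs power (becomes a load) -
    # hence the expensive noises and magic smoke.
    ```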

    One major edge case you have is night charging of EVs. Let’s say I’m a 9-5 office worker with a standard parking lot at my workplace. I’m just a keyboard monkey doing whatever, so I’m not a decision maker as to what goes into the parking lot infrastructure-wise; I’m at the mercy of whatever Facilities is doing, and gods know what that is. But I have a nice brand-new EV, and I want to charge it. When I drive home after DST ends, it’s dark outside. There’s no solar to charge my car. Some renewables (like wind and hydro) work at night, but solar doesn’t. I’d need to charge an auxiliary power storage system during the day, and then transfer that to my EV battery at night. That’s more complexity.

    Storing power from any kind of generation is a huge issue with many different solutions, and not all of them are batteries. And no system is perfect, so there are energy losses whenever we convert from type A to type B of whatever.
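
    As a back-of-envelope illustration of those losses, chaining solar into a buffer battery and then into the EV might look like this; every efficiency figure and the 15 kWh daily top-up are rough assumptions, not specs:

    ```python
    # Hypothetical conversion chain: panels -> buffer battery -> EV at night.
    SOLAR_TO_BATTERY = 0.95    # assumed charge-controller efficiency
    BATTERY_ROUNDTRIP = 0.90   # assumed lithium round-trip efficiency
    BATTERY_TO_EV = 0.92       # assumed inverter + onboard charger

    ev_need_kwh = 15.0  # assumed daily commute top-up

    chain = SOLAR_TO_BATTERY * BATTERY_ROUNDTRIP * BATTERY_TO_EV
    print(f"overall efficiency: {chain:.0%}")                  # ~79%
    print(f"solar to harvest: {ev_need_kwh / chain:.1f} kWh")  # ~19 kWh
    # Roughly a fifth of the energy disappears before it ever moves the car.
    ```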

    Or… I could just hook my EV up to the grid where the cost of my bill per kilowatt hour includes systems and people to manage keeping the system on voltage and on frequency, 24/7/365.25.

    Any power produced during the day by a solar system that doesn’t get immediately used needs to be stored (it HAS to go somewhere, or you literally break the grid or waste it). That energy storage - along with the voltage converters - is going to take up extra cubic footage in your system that won’t be small, and it requires regular monitoring and maintenance to stay online. The system you’re proposing is going to create many fragments of the grid in the form of these pop-up neighborhood charging stations, each entirely dependent on what resources are available within a one-mile radius.

    Even if you assume that you don’t have to frequency-synchronize with the main grid and you’re fully isolated, you run into another big problem: local generation isn’t always reliable. Solar especially is very susceptible to the giant orb in the sky being around, so your local energy storage needs to be able to hold enough power for a certain percentage above your worst-case cloudy day while maintaining the output needed to sustain the local EVs depending on it. If you get a 2- or 3-day storm, I hope you have enough energy storage to ride out low daytime charge rates for 4 to 5 days. In the playlist, there’s also a video about running hydroelectric generators in reverse to store excess energy as physical potential energy in a reservoir, as one example of how a grid might store it.
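
    For a rough sense of how that sizing works out, here’s a sketch where the demand, array output, and weather figures are all invented placeholders:

    ```python
    # Hypothetical islanded neighborhood charging station.
    DAILY_EV_DEMAND_KWH = 300    # e.g. 20 cars x 15 kWh top-up (assumed)
    SUNNY_DAY_OUTPUT_KWH = 400   # array output on a clear day (assumed)
    CLOUDY_FRACTION = 0.20       # worst-case cloudy output vs sunny
    STORM_DAYS = 3               # back-to-back bad-weather days
    MARGIN = 1.25                # headroom above the worst case

    cloudy_output = SUNNY_DAY_OUTPUT_KWH * CLOUDY_FRACTION
    daily_shortfall = DAILY_EV_DEMAND_KWH - cloudy_output
    storage_kwh = daily_shortfall * STORM_DAYS * MARGIN
    print(f"shortfall per cloudy day: {daily_shortfall:.0f} kWh")  # 220 kWh
    print(f"battery bank needed:     {storage_kwh:.0f} kWh")       # 825 kWh
    # ~825 kWh is on the order of sixty Powerwall-class (13.5 kWh) units -
    # that "extra cubic footage" adds up fast.
    ```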

    This is one thing the major grids are quite literally engineered and regulated to accomplish: because they are so large, they can simply import energy via the market system from somewhere with better weather or slightly off-peak demand. And when one type of energy becomes less viable in a given weather condition (like solar on a cloudy day), they have a diversified generation portfolio of other sources: renewables like wind and hydro, nuclear for steady baseload, and even grid-scale energy storage systems such as flywheels (fast stabilization), pumped water storage, and giant batteries. And if all of those fail, well, yes, we do still have dinosaurs to burn. (The world’s not perfect yet, and we should by all means push for progress, but it will be a long road.) All these sources already work together to keep the grids on voltage and on frequency, with the physical and managerial infrastructure to keep everything connected and synchronized so that supply and demand stay balanced.
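
    As a toy illustration of that balancing act, here’s a greedy merit-order dispatch sketch: meet demand from the cheapest available source first. Every name, capacity, and cost below is invented for the example:

    ```python
    # Invented portfolio: (name, available capacity MW, marginal cost $/MWh)
    portfolio = [
        ("wind",            800,  5),
        ("solar",             0,  5),   # cloudy day: nothing available
        ("hydro",           600, 15),
        ("nuclear",        1200, 25),
        ("pumped storage",  300, 40),
        ("gas peaker",      900, 90),   # the dinosaurs, if all else fails
    ]

    def dispatch(demand_mw: float):
        """Fill demand from the cheapest sources first (merit order)."""
        plan = []
        for name, cap, cost in sorted(portfolio, key=lambda s: s[2]):
            take = min(cap, demand_mw)
            if take > 0:
                plan.append((name, take, cost))
                demand_mw -= take
        return plan, demand_mw

    plan, unmet = dispatch(3200)
    for name, mw, cost in plan:
        print(f"{name:14s} {mw:6.0f} MW @ ${cost}/MWh")
    print(f"unmet demand: {unmet} MW")  # >0 means importing or load shedding
    ```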



  • If you’re trying to do VDI in the cloud, that can get expensive fast on account of the GPU processing needed.

    Most of the protocols I know of that run CPU-only (and I’m perfectly happy to be proven wrong and introduced to something new) tend to fray at high latency or high resolution. The usual top two I’ve seen are VNC and RDP (the XRDP project on Linux), with NoMachine and plain X11 over SSH right behind them. I think NoMachine had the best performance of those, but it’s been a hot minute since I’ve personally used it. XRDP is the one I’ve used most often; getting login/lock/unlock working was fiddly at first, but it seems to be holding stable.

    Jumping from the “basic connection, maybe barely but not always suitable for video” tier to “ultra high grade, high speed”, we have Parsec and Sunshine+Moonlight. Parsec is currently limited to Windows/Mac hosting (with a Linux client available), and both Parsec and Sunshine require or recommend a reasonable GPU to handle the encoding stage (although I believe Sunshine may support a software x264 encoder, which can exert a heavy CPU tax depending on your resolution). The specific problem of sourcing a GPU in the cloud (since you mention EC2) becomes the expensive part. This class of remote access tends to fray less at high resolution and frame rate because it’s designed to transport video and games, rather than taking shortcuts to get a minimum desktop visible.
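
    Some napkin math on why the encoder matters; the session parameters and target bitrate below are assumptions:

    ```python
    # Raw framebuffer bandwidth for an assumed remote desktop session.
    width, height, fps = 1920, 1080, 60
    bits_per_pixel = 24  # RGB888

    raw_bps = width * height * bits_per_pixel * fps
    print(f"uncompressed: {raw_bps / 1e9:.2f} Gbit/s")  # ~2.99 Gbit/s

    # A hardware H.264/HEVC encoder gets a playable desktop stream down to
    # a ballpark 10-50 Mbit/s (content-dependent), which is why Parsec and
    # Sunshine lean on the GPU - and why software x264 at this resolution
    # eats CPU instead.
    encoded_mbps = 20  # illustrative target bitrate
    print(f"compression needed: ~{raw_bps / (encoded_mbps * 1e6):.0f}x")
    ```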





  • I think you’re asking too much from ZFS. Ceph, Gluster, or some other cluster-native filesystem (GFS, OCFS, Lustre, etc.) would handle all of the replication/writes atomically in the background, instead of running replication as a post-processing step on top of an existing storage solution.

    You specifically mention a gap window - that gap window is not a bug; it’s a feature of using a replication timer, even one based on an atomic snapshot. The only way to get around that gap is to use different tech. In this case, all of the options above can replicate data whenever the VM/CT makes a file I/O - and the workload won’t get a write acknowledgement until the replication has completed successfully. As far as the workload is concerned, the write just takes a few extra milliseconds compared to pure local storage (which many workloads don’t actually care about).
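
    Here’s a toy model of that semantic difference - a sketch of the idea, not real Ceph or ZFS client code; the replica names and 2 ms round trip are assumptions:

    ```python
    import time

    REPLICAS = ["node1", "node2", "node3"]  # hypothetical cluster members
    NETWORK_RTT = 0.002                     # assumed 2 ms per replica write

    def async_write(data: bytes) -> float:
        """Snapshot/timer replication: ack immediately, replicate later.
        Everything written since the last snapshot is the gap window."""
        start = time.perf_counter()
        # local write only; replicas catch up whenever the timer fires
        return time.perf_counter() - start

    def sync_write(data: bytes) -> float:
        """Cluster-native replication: ack only when every replica has it."""
        start = time.perf_counter()
        for _ in REPLICAS:
            time.sleep(NETWORK_RTT)  # stand-in for a per-replica round trip
        return time.perf_counter() - start

    print(f"async ack: {async_write(b'x') * 1000:.3f} ms (replicas behind)")
    print(f"sync ack:  {sync_write(b'x') * 1000:.3f} ms (no gap window)")
    ```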

    I’ve personally been working on a project to convert my lab from ESXi vSAN to PVE+Ceph, and conversions like that (even a simpler one like PVE+ZFS to PVE+Ceph) would require the target disk to be wiped at some point in the process.

    You could try temporarily storing your data on an external USB hard drive, or, if you can get your workloads into a quiet state or a maintenance window, you could use the replication you already have, rebuild the disks (but not the PVE OS itself) one node at a time, and restore/migrate the workloads to the new Ceph target as each node is completed.

    On paper (I have not yet personally tested this), you could even take it a step further: for all of your VMs that connect to the NFS share for their data, you could replace that NFS container (a single point of failure) with the cluster storage engine itself. There’s no rule I know of that says you can’t. That way, your VM data is written directly to the engine at a lower latency than VM -> NFS -> ZFS/Ceph/etc.



  • My server rack has

    • 3x Dell R730
    • 1x Dell R720
    • 2x Cisco Catalyst 3750x (IP Routing license)
    • 2x Netgear M4300-12x12f
    • 1x Unifi USW-48-Pro
    • 1x USW-Agg
    • 3x Framework 11th Gen (future cluster)
    • 1x Protectli FE4B

    All together that draws… 0.1 kWh… every 327s.

    In real-time terms, measured at the UPS, I have a stable running load of 900-1100 W depending on what’s under load. I call it my computationally efficient space heater, because it generates more heat than my apartment needs in winter except on the coldest of days. It has a dedicated 120 V 15 A circuit.
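
    For anyone checking that math (the electricity rate below is a hypothetical $0.15/kWh):

    ```python
    # At the measured steady-state draw, how long to burn 0.1 kWh?
    load_w = 1100                    # upper end of the 900-1100 W range
    seconds = 0.1 * 1000 * 3600 / load_w
    print(f"{seconds:.0f} s")        # ~327 s, i.e. 0.1 kWh every ~5.5 min

    # What the space heater costs to run at the assumed rate:
    rate_per_kwh = 0.15              # hypothetical $/kWh
    monthly = load_w / 1000 * 24 * 30 * rate_per_kwh
    print(f"~${monthly:.0f}/month")  # ~$119/month at 1.1 kW continuous
    ```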