The term for what you are asking about is AGI, Artificial General Intelligence.
I’m very down for Artificial Narrow Intelligence. It already improves our lives in a lot of ways and has been since before I was born (and I remember Napster).
I’m also down for Data from Star Trek, but that won’t arise particularly naturally. AGI will have a lot of hurdles; I just hope it’s air-gapped and has safeguards on it until it’s old enough to be past its killing-all-humans phase. I’m only slightly joking. I know a self-aware intelligence may take issue with this, but at the very least it has to be intelligent enough to understand why before it can be allowed to crawl.
AGIs, if we make them, will have the potential to outlive humans, but I want to imagine what could be with both of us together. Assuming greed doesn’t take it off the safety rails before anyone is ready. Scientists and engineers like to have safeguards, but corporate suits do not. At least not in technology; they like safeguards on bank accounts. So… yes, but I entirely believe now to be a terrible time for it to happen. I would love to be proven wrong.
As someone who has professionally done legal reverse engineering: no. No, it isn’t.
The security you get from vetting your code is invaluable. Closing things off makes it more likely that flaws go uncaught by good actors, and thus go unfixed and get exploited by bad actors.
And obscurity does nothing to stop bad actors if there’s money to be had. It will temporarily stop script kiddies, though. Until the exploit finds its way into their suite of exploits that no one’s fixed yet.