Fully agree on the approach - a layered AI-system architecture can simultaneously centralize information sharing (the default being to share data through governed APIs) and decentralize decision-making (putting the data and models at the point of need). That combination empowers individuals (enhanced human security) and gives governments effectiveness/efficiency incentives to provide services at the right level. One major drawback: by centralizing, you could also hand nefarious actors the ability to "control" populations - we already know the power of social media to influence human behavior when the incentives for monetary gain are there (thanks, Meta, Google, etc.). But I think the positives outweigh the negatives here, and as we get closer to AGI, we need to find ways to leverage the technology to reinforce our better angels and mitigate our human biases, so that the majesty of our founding documents can be realized by all.
That's why it's gotta be transparent from the binary level up, and opt-in. I think those two give us the best chance for buy-in. Since the target dataset is government regulations, the corpus should be small and well-defined enough to keep the bias and hallucination tendencies of AI in check. It's kind of like training with 500 lbs and then lifting 10 lbs: the subset should be constrained enough that an LLM-based system can handle the yes/no nature of a citizen's question.
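The "small, constrained corpus" idea above could be sketched as a retrieval step that only ever answers from a fixed set of regulation texts. This is a minimal toy illustration, not a real system: the `REGULATIONS` entries, the `answer_citizen_question` helper, and the naive keyword-overlap scoring are all hypothetical stand-ins for a proper retrieval pipeline and a constrained LLM.

```python
# Hypothetical sketch: restrict answers to a small, fixed corpus of
# regulation text, so the system responds only from retrieved passages
# rather than open-ended generation. Keyword overlap stands in for a
# real retrieval/ranking step.

REGULATIONS = {
    "REG-101": "A permit is required to operate a food truck within city limits.",
    "REG-202": "Residential parking permits must be renewed every 12 months.",
}


def retrieve(question, corpus, top_k=1):
    """Rank regulation passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def answer_citizen_question(question):
    """Answer only from retrieved regulation text; refuse when nothing matches."""
    reg_id, text = retrieve(question, REGULATIONS)[0]
    if not set(question.lower().split()) & set(text.lower().split()):
        return "No applicable regulation found."
    return f"Per {reg_id}: {text}"


print(answer_citizen_question("Do I need a permit for a food truck?"))
```

The design point mirrors the 500 lbs/10 lbs analogy: the model's answer space is bounded by a corpus far smaller than what it was trained on, and anything outside that corpus gets an explicit refusal instead of a guess.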