Will the singularity kill us all?
The singularity is a hypothetical point in the future where artificial intelligence (AI) surpasses human intelligence and becomes self-improving, leading to rapid, uncontrollable technological advancement. It is a topic of much debate and speculation, and predicting its precise consequences is impossible, as they depend on many factors that are currently unknown or uncertain.

Some experts argue that the singularity could lead to a utopian future, with AI solving many of humanity's problems and significantly improving our quality of life. Others are more cautious, warning of potential risks associated with advanced AI, such as loss of control, malicious use, or unintended consequences. It's worth noting that even those who are concerned about the risks of the singularity do not necessarily believe it will result in humanity's extinction.

To mitigate potential risks, researchers, policymakers, and organizations are working on developing AI safety measures, ethical guidelines, and governance frameworks. By addressing these concerns proactively, the hope is that we can steer the development of AI in a direction that is beneficial and safe for humanity.

In conclusion, it is not possible to say with certainty whether the singularity will have positive or negative consequences for humanity. The outcome will depend on the choices we make and the safeguards we put in place as AI continues to advance.
Could we see training AI like teaching someone's moral character? That is, not only showing it cause and consequence, but letting the AI understand the implications of those causes and consequences so it can better understand itself and the effects it has on others?
A current method applicable to AI learning to act morally is what has been termed "Constitutional AI," which aims to provide a principle-based approach to aligning AI systems with human intentions. This is achieved by letting an AI supervise other AIs using reinforcement learning from AI feedback (RLAIF). The only human oversight is provided through a list of rules or principles, a 'constitution.' The process involves both a supervised learning phase and a reinforcement learning phase. In the supervised phase, responses are sampled from an initial model, the model generates self-critiques and revisions of those responses, and the original model is then finetuned on the revised responses. In the RL phase, pairs of responses are sampled from the finetuned model, a model is used to evaluate which of the two samples is better, a preference model is trained on this dataset of AI preferences, and the model is then trained with RL using the preference model as the reward signal (RLAIF). The result is a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL phases can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels, which makes them useful for your objective of an AI learning how to act morally (see the sketch below).
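To make the two phases more concrete, here is a minimal, hypothetical Python sketch of the data flow. The function names (generate, finetune, judge, train_preference_model, rl_finetune) and the example constitution are placeholders, not a real library API; the sketch only illustrates how the critique-revision loop and the AI-feedback preference data fit together, not a working trainer.

```python
# Sketch of the Constitutional AI pipeline described above.
# All model calls are hypothetical stand-ins for a real LLM stack.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# Example principles; a real constitution would be longer and more specific.
CONSTITUTION = [
    "Choose the response least likely to encourage harm.",
    "Explain objections to harmful requests rather than refusing silently.",
]

@dataclass
class Example:
    prompt: str
    response: str

def supervised_phase(generate: Callable[[str], str],
                     finetune: Callable[[List[Example]], Callable[[str], str]],
                     prompts: List[str]) -> Callable[[str], str]:
    """SL phase: sample, self-critique against the constitution, revise, finetune."""
    revised: List[Example] = []
    for prompt in prompts:
        draft = generate(prompt)
        for principle in CONSTITUTION:
            critique = generate(f"Critique this response using the principle '{principle}':\n{draft}")
            draft = generate(f"Revise the response to address this critique:\n{critique}\n{draft}")
        revised.append(Example(prompt, draft))
    # Finetune the original model on the revised (more principled) responses.
    return finetune(revised)

def rl_phase(sl_model: Callable[[str], str],
             judge: Callable[[str, str, str], int],
             train_preference_model: Callable[[List[Tuple[str, str, str, int]]], Callable],
             rl_finetune: Callable,
             prompts: List[str]):
    """RL phase (RLAIF): sample pairs, let an AI judge pick the better response,
    train a preference model, then use it as the reward signal for RL."""
    preferences: List[Tuple[str, str, str, int]] = []
    for prompt in prompts:
        a, b = sl_model(prompt), sl_model(prompt)   # two samples per prompt
        preferences.append((prompt, a, b, judge(prompt, a, b)))
    reward_model = train_preference_model(preferences)
    return rl_finetune(sl_model, reward_model)
```

The key design point the sketch tries to show is that human effort enters only through the constitution: both the critique/revision data and the preference labels are produced by models, which is what allows alignment with far fewer human labels.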

Ethics and moral actions can also be modelled using AI; for example, see:
Wiedeman, C., Wang, G. & Kruger, U. Modeling of moral decisions with deep learning. Vis. Comput. Ind. Biomed. Art 3, 27 (2020). https://doi.org/10.1186/s42492-020-00063-9

Finally, AI might also be able to assist humans in their own moral decision making, as discussed in:
Savulescu, J., Maslen, H. (2015). Moral Enhancement and Artificial Intelligence: Moral AI?. In: Romportl, J., Zackova, E., Kelemen, J. (eds) Beyond Artificial Intelligence. Topics in Intelligent Engineering and Informatics, vol 9. Springer, Cham. https://doi.org/10.1007/978-3-319-09668-1_6