<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Zenith Blog</title>
        <link>https://zenith.finos.org/blog</link>
        <description>Zenith Blog</description>
        <lastBuildDate>Wed, 24 Jan 2024 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <item>
            <title><![CDATA[Microsoft Mesh is Generally Available]]></title>
            <link>https://zenith.finos.org/blog/mesh-is-ga</link>
            <guid>https://zenith.finos.org/blog/mesh-is-ga</guid>
            <pubDate>Wed, 24 Jan 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Microsoft made Mesh generally available. Why is this an important milestone? Mesh is about the importance of deep human connections in the workplace and how they are crucial for increasing engagement, stimulating performance, and retaining talent. It helps ensure that remote and hybrid work arrangements no longer pose challenges in building personal connections. According to Microsoft's Work Trend Index report, 43 percent of leaders find relationship building to be a significant challenge in remote and hybrid work - so this problem needed to be addressed. To address it, Microsoft has introduced "Microsoft Mesh," a technology that powers 3D immersive experiences to make virtual connections feel more like face-to-face interactions. Mesh is available on PC and Meta Quest VR devices. Several organizations, including Takeda, Accenture, bp, and Mercy Ships, have already started benefiting from Mesh. Each organization has leveraged Mesh to create engaging and productive virtual environments, reducing travel and real estate costs, and has used it for purposes ranging from large company-wide events to onboarding and training. Inside Microsoft Teams, Mesh offers 3D immersive spaces for team social gatherings, brainstorming sessions, and round-table discussions. Users can create avatars to represent themselves, engage in spatial audio-enabled small-group discussions, and utilize familiar Teams features like accessing shared content, chat, and live reactions. Organizations can also host larger events with custom immersive experiences - a no-code editor allows customization with visual elements like organization logos, videos, and presentation content. Unity integration with the Mesh toolkit enables the creation of custom interactive experiences.]]></description>
            <content:encoded><![CDATA[<p>Microsoft made Mesh <a href="https://www.microsoft.com/en-us/microsoft-365/blog/2024/01/24/bring-virtual-connections-to-life-with-microsoft-mesh-now-generally-available-in-microsoft-teams/" target="_blank" rel="noopener noreferrer">generally available</a>. Why is this an important milestone? Mesh is about the importance of deep human connections in the workplace and how they are crucial for increasing engagement, stimulating performance, and retaining talent. It helps ensure that remote and hybrid work arrangements no longer pose challenges in building personal connections. According to Microsoft's Work Trend Index report, 43 percent of leaders find relationship building to be a significant challenge in remote and hybrid work - so this problem needed to be addressed. To address it, Microsoft has introduced "Microsoft Mesh," a technology that powers 3D immersive experiences to make virtual connections feel more like face-to-face interactions. Mesh is available on PC and Meta Quest VR devices. Several organizations, including Takeda, Accenture, bp, and Mercy Ships, have already started benefiting from Mesh. Each organization has leveraged Mesh to create engaging and productive virtual environments, reducing travel and real estate costs, and has used it for purposes ranging from large company-wide events to onboarding and training. Inside Microsoft Teams, Mesh offers 3D immersive spaces for team social gatherings, brainstorming sessions, and round-table discussions. Users can create avatars to represent themselves, engage in spatial audio-enabled small-group discussions, and utilize familiar Teams features like accessing shared content, chat, and live reactions. Organizations can also host larger events with custom immersive experiences - a no-code editor allows customization with visual elements like organization logos, videos, and presentation content. Unity integration with the Mesh toolkit enables the creation of custom interactive experiences.</p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Room-temperature quantum coherence]]></title>
            <link>https://zenith.finos.org/blog/room-quantum-breakthrough</link>
            <guid>https://zenith.finos.org/blog/room-quantum-breakthrough</guid>
            <pubDate>Thu, 11 Jan 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[There has been a breakthrough in achieving quantum coherence of entangled multiexcitons at room temperature, which is crucial for quantum computing and sensing technologies. The researchers used a chromophore, a dye molecule that can excite electrons with desirable spins, and embedded it in a metal-organic framework (MOF), a nanoporous crystalline material that can suppress molecular motion and preserve quantum coherence. The researchers observed the quantum coherence of a quintet state, a state of four entangled electrons, for over 100 nanoseconds at room temperature by photoexciting the electrons with microwave pulses. This is the first time such a state has been achieved at room temperature. The findings open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds. The researchers plan to search for more suitable guest molecules and MOF structures to generate quintet multiexciton state qubits more efficiently.]]></description>
            <content:encoded><![CDATA[<p>There has been a <a href="https://www.kyushu-u.ac.jp/en/researches/view/274/" target="_blank" rel="noopener noreferrer">breakthrough</a> in achieving quantum coherence of entangled multiexcitons at room temperature, which is crucial for quantum computing and sensing technologies. The researchers used a chromophore, a dye molecule that can excite electrons with desirable spins, and embedded it in a metal-organic framework (MOF), a nanoporous crystalline material that can suppress molecular motion and preserve quantum coherence. The researchers observed the quantum coherence of a quintet state, a state of four entangled electrons, for over 100 nanoseconds at room temperature by photoexciting the electrons with microwave pulses. This is the first time such a state has been achieved at room temperature. The findings open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds. The researchers plan to search for more suitable guest molecules and MOF structures to generate quintet multiexciton state qubits more efficiently.</p>]]></content:encoded>
            <category>quantum-computing</category>
        </item>
        <item>
            <title><![CDATA[Amazon Introduces New Quantum Chip]]></title>
            <link>https://zenith.finos.org/blog/aws-qec-reinvent-2023</link>
            <guid>https://zenith.finos.org/blog/aws-qec-reinvent-2023</guid>
            <pubDate>Sat, 02 Dec 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[One of the challenges with leveraging quantum computing stems from the error-prone nature of the basic unit of quantum information: quantum bits, or qubits. By leveraging the unique properties of superposition, simultaneously existing as a state of 0 and 1, as well as entanglement, qubits enable complex computations in fields such as finance. There is a trade-off, however: currently, a large amount of effort must go into quantum error correction for this to be useful.]]></description>
            <content:encoded><![CDATA[<p>One of the challenges with leveraging quantum computing stems from the error-prone nature of the basic unit of quantum information: quantum bits, or qubits. By leveraging the unique properties of superposition, simultaneously existing as a state of 0 and 1, as well as entanglement, qubits enable complex computations in fields such as finance. There is a trade-off, however: currently, a large amount of effort must go into quantum error correction for this to be useful.</p>
<p>Quantum Error Correction (QEC) is the daunting task of mitigating the sensitivity of qubits to noise in the environment. Qubits can experience errors in two dimensions: bit flips, where, similarly to a classical computing error, the qubit’s computational state flips from 1 to 0 or vice versa, and phase flips, where the relative sign between the qubit’s amplitude states changes. AWS <a href="https://www.forbes.com/sites/craigsmith/2023/11/28/amazon-introduces-new-quantum-chip-to-reduce-errors" target="_blank" rel="noopener noreferrer">introduced at re:Invent 2023</a> a new quantum chip where it was stated "<i>We've been able to suppress errors by 100x by using a passive error correction approach</i>".</p>
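<p>To make the two error types concrete, here is a minimal sketch - illustrative only, using plain NumPy state vectors rather than any quantum SDK - of how a bit flip and a phase flip act on a single qubit:</p>
<pre><code class="language-python"># Illustrative only: a single-qubit state as a length-2 complex vector.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip error (Pauli-X)
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase-flip error (Pauli-Z)

# |+> = (|0> + |1>) / sqrt(2), an equal superposition of 0 and 1.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print(X @ plus)  # bit flip swaps the |0> and |1> amplitudes
print(Z @ plus)  # phase flip negates the |1> amplitude, turning |+> into |->
</code></pre>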
<p>The advancements in error rectification and the efficiency of hardware are paving the way for a future where quantum computers can tackle intricate challenges that were once thought to be beyond our reach.</p>
<p>For those interested in exploring Quantum Computing on AWS, there are free <a href="https://explore.skillbuilder.aws/learn/public/learning_plan/view/1986/amazon-braket-badge-knowledge-badge-readiness-path" target="_blank" rel="noopener noreferrer">Braket learning paths</a> on the AWS Skills Builder platform. You can gain hands-on experience with the AWS Quantum ecosystem while earning a digital badge demonstrating the knowledge learned.</p>
<p><img loading="lazy" alt="AWS Quantum Chip" src="https://zenith.finos.org/assets/images/aws-chip-b6ad5d5548201c329807dd8f66704c5e.png" width="625" height="342" class="img_ev3q"></p>]]></content:encoded>
            <category>quantum-computing</category>
        </item>
        <item>
            <title><![CDATA[TCL announces RayNeo Air 2]]></title>
            <link>https://zenith.finos.org/blog/rayneo-air-2</link>
            <guid>https://zenith.finos.org/blog/rayneo-air-2</guid>
            <pubDate>Thu, 16 Nov 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The story started with the TCL RayNeo X2 and TCL NXTWEAR S at the beginning of the year, and continued with TCL announcing the RayNeo Air 2. Yes, you spotted it: there never was a RayNeo Air 1, although they claim it is lighter and sleeker, with a more advanced display and audio, compared to the previous generation.]]></description>
            <content:encoded><![CDATA[<p>The story started with the TCL RayNeo X2 and TCL NXTWEAR S at the beginning of the year, and continued with TCL announcing the RayNeo Air 2. Yes, you spotted it: there never was a RayNeo Air 1, although they claim it is lighter and sleeker, with a more advanced display and audio, compared to the previous generation.</p>
<p>One of the big advancements is display brightness: compared to the NXTWEAR S it is a 25 percent increase, to an impressive 600 nits. This makes the glasses usable outdoors, and with the added option to make the shades a permanent part of the glasses, it creates a stylish look reminiscent of dark sunglasses.</p>
<p>Furthermore, the FoV has increased to 46 degrees, which matches the XReal Air 2 and comes close to Rokid's 49-degree FoV. Although pure VR glasses have a more extensive FoV, smart glasses like this one have a portability advantage, making them ideal for on-the-go media consumption and enhanced productivity.</p>
<p>When it comes to software, it still runs the RayNeo XR software providing the AR experience - similar to Nebula for the Xreal glasses. It provides seamless integration with Samsung phones and tablets, including multi-window browsing, media streaming, gaming, and more. Although its Android support is better, it also works as a MiraScreen target to mirror the iPhone experience.</p>
<p><img loading="lazy" alt="The new RayNeo Air 2" src="https://zenith.finos.org/assets/images/rayneoair2-b7b326e8acb95b93e7673190c89512bf.png" width="1324" height="693" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Azure Quantum Elements]]></title>
            <link>https://zenith.finos.org/blog/azure-quantum-elements</link>
            <guid>https://zenith.finos.org/blog/azure-quantum-elements</guid>
            <pubDate>Wed, 15 Nov 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Azure Quantum Elements is a new service that integrates the latest breakthroughs in high-performance computing, artificial intelligence, and quantum computing to accelerate scientific discovery in chemistry and materials science. It enables researchers to scale, speed up, and improve the accuracy of molecular simulations using cloud-based workflows, AI models, and quantum tools. Also, it provides a natural language interface called Copilot, which helps researchers generate code, query and visualize data, and initiate simulations using conversational interactions. As the name implies, it is part of the Azure Quantum ecosystem, which offers quantum hardware, software, and solutions in a single cloud service. Looking to the future, it is based on Microsoft’s roadmap to a quantum supercomputer, which aims to solve some of the most challenging problems in nature using a new type of stable qubit.]]></description>
            <content:encoded><![CDATA[<p><a href="https://quantum.microsoft.com/en-us/our-story/quantum-elements-overview" target="_blank" rel="noopener noreferrer">Azure Quantum Elements</a> is a new service that integrates the latest breakthroughs in high-performance computing, artificial intelligence, and quantum computing to accelerate scientific discovery in chemistry and materials science. It enables researchers to scale, speed up, and improve the accuracy of molecular simulations using cloud-based workflows, AI models, and quantum tools. Also, it provides a natural language interface called Copilot, which helps researchers generate code, query and visualize data, and initiate simulations using conversational interactions. As the name implies, it is part of the Azure Quantum ecosystem, which offers quantum hardware, software, and solutions in a single cloud service. Looking to the future, it is based on Microsoft’s roadmap to a quantum supercomputer, which aims to solve some of the most challenging problems in nature using a new type of stable qubit.</p>]]></content:encoded>
            <category>quantum</category>
        </item>
        <item>
            <title><![CDATA[Behind the Mask of Ethics in AI]]></title>
            <link>https://zenith.finos.org/blog/ai-ethics</link>
            <guid>https://zenith.finos.org/blog/ai-ethics</guid>
            <pubDate>Fri, 06 Oct 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Every morning when I check the news I see another story about concerns and horror stories in AI, talking about how it will end the world. Or even some horrid examples of how it’s used today for nefarious purposes. This fear then morphs into hope through infectious optimism when we log in to work in our various roles across technology.]]></description>
            <content:encoded><![CDATA[<p>Every morning when I check the news I see another story about concerns and horror stories in AI, talking about how it will end the world. Or even some horrid examples of how it’s used today for nefarious purposes. This fear then morphs into hope through infectious optimism when we log in to work in our various roles across technology.</p>
<p><img loading="lazy" alt="Masks on the Wall" src="https://zenith.finos.org/assets/images/finan-akbar-HuC3cii5VA8-unsplash-03d649418ce097718637f07b956c215a.jpg" width="4240" height="2832" class="img_ev3q">
<em>Source: <a href="https://unsplash.com/photos/HuC3cii5VA8" target="_blank" rel="noopener noreferrer">Unsplash</a></em></p>
<p>Sometimes it can feel like we need to switch masks to find the costume that fits our current AI audience. Or that we often have mixed feelings about their use based on our experiences using them, or the direct impact they have on our own lives. The reality is that we need to call for pragmatism and adhere to best practices in AI ethics. Only then can we ensure the technology is used responsibly, securely, and to the benefit of this world.</p>
<p>Given it’s October, and Halloween is fast approaching, let’s consider some of the ghouls and goblins haunting the field of AI ethics, along with the state of best practices in the field.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="bias-and-transparency">Bias and Transparency<a href="https://zenith.finos.org/blog/ai-ethics#bias-and-transparency" class="hash-link" aria-label="Direct link to Bias and Transparency" title="Direct link to Bias and Transparency">​</a></h2>
<p>Transparency of training datasets is an important consideration for assessing the potential for bias in the results generated by a given machine learning model or algorithm. When using algorithms in our applications without the ability to scrutinise the training datasets, we open ourselves to building applications that contain the biases mirrored in our society.</p>
<p>There are many stories about this that have existed for a considerable period of time. The book and podcast <a href="https://www.waterstones.com/book/invisible-women/caroline-criado-perez/9781784706289" target="_blank" rel="noopener noreferrer">Invisible Women by Caroline Criado-Perez</a> talks about the emergence of various technologies and innovations throughout history where their design negatively impacts women, from the <a href="https://www.theguardian.com/lifeandstyle/2019/feb/23/truth-world-built-for-men-car-crashes" target="_blank" rel="noopener noreferrer">use of crash test dummies based around men from the 1950s until 2011</a> to AI within the healthcare industry being less likely to diagnose medical conditions in women and other diverse populations. Or even <a href="https://youtu.be/nwhwNfO1WMQ?t=1507" target="_blank" rel="noopener noreferrer">this admission from Tom Cools at Devoxx UK</a> that his attempts to automate the attendance register went wrong when racial bias in the machine learning algorithm he used wouldn’t register one of his students. But we also need to consider other points of bias such as political affiliation <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8967082/" target="_blank" rel="noopener noreferrer">as covered in this paper from Uwe Peters</a> or even the skewing of results by misinformation.</p>
<p>To build applications and technologies and automate away the complexities of life in this AI world in a way that reflects our amazing and diverse society, we need visibility into how they are trained, as well as potential outputs. Model cards are a crucial first step to solving this problem. Quite simply, model cards are documentation that describes useful aspects of a model including:</p>
<ol>
<li>Intended use cases</li>
<li>Limitations</li>
<li>Ethical considerations</li>
<li>Datasets used in training</li>
<li>Results of experiments to test the model</li>
</ol>
<p>Not all models have a publicly available card. Prominent repositories are adopting this practice, including <a href="https://huggingface.co/" target="_blank" rel="noopener noreferrer">Hugging Face</a> which has a <a href="https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md" target="_blank" rel="noopener noreferrer">publicly available template</a>. Even models that do have cards may not disclose full specifications or dataset details, balancing transparency against competitive considerations. The situation is the same with dataset cards, <a href="https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md" target="_blank" rel="noopener noreferrer">for which there is also a template available</a>, where details of biases in the datasets are often not transparent. For both, we need an agreed standard and charter committing to provide these assets.</p>
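<p>As a small, hedged illustration of how accessible this metadata can be, the sketch below uses the huggingface_hub Python library to fetch and inspect a model card programmatically (the repository name is just a familiar example, and the fields present vary from model to model):</p>
<pre><code class="language-python"># Sketch: fetching a model card with huggingface_hub (pip install huggingface_hub).
from huggingface_hub import ModelCard

# "bert-base-uncased" is used purely as a well-known example repository.
card = ModelCard.load("bert-base-uncased")

print(card.data.to_dict())  # structured metadata: license, datasets, tags, ...
print(card.text[:500])      # free-text sections: intended use, limitations, ...
</code></pre>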
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="model-interference-and-validation">Model Interference and Validation<a href="https://zenith.finos.org/blog/ai-ethics#model-interference-and-validation" class="hash-link" aria-label="Direct link to Model Interference and Validation" title="Direct link to Model Interference and Validation">​</a></h2>
<p>Just like applications built without cutting-edge technologies, AI technologies are susceptible to security attacks and manipulation. With the emergence of LLMs, OWASP has created an official <a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0_1.pdf" target="_blank" rel="noopener noreferrer">Top 10 for Large Language Model Applications</a>.</p>
<p>Many of the threats listed relate to tampering with data, prompts or even the outputs, which can lead to the spread of misinformation or malicious activity in the wrong hands. Software libraries exist allowing for the manipulation of these models that, in the event of model theft or duplication, could be used to produce malicious results that cause reputational damage to institutions.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="data-privacy">Data Privacy<a href="https://zenith.finos.org/blog/ai-ethics#data-privacy" class="hash-link" aria-label="Direct link to Data Privacy" title="Direct link to Data Privacy">​</a></h2>
<p>It’s not even the case that knowledge of programming is required to potentially manipulate these algorithms. Staying with the LLM example, researchers at IBM reported in August results from an experiment where they were able to hypnotize several LLMs to trick them into releasing confidential information and creating malicious code. In heavily regulated industries such as financial services, this can lead to not only significant reputational damage but also sanctions and fines by regulators across the world.</p>
<p>Moving away from attacks, exposure to proprietary customer information can also be a concern. As covered in a recent FINOS Zenith Brain Trust meeting, the state of licensing for models, and also the code for these technologies, is not mature or well understood.</p>
<p>Even with well-understood licensing, we need to consider the usability challenges this can introduce to developers using these technologies in applications, as well as employees using the tools as part of automating their own work. Just as in our personal lives, when we click accept on license terms whose length rivals War and Peace without reading them, ignorance of the terms may lead to private information being leaked as training data for models and algorithms.</p>
<p>Many organizations are already providing company guidelines on the use of AI tools and rules on the exposure of private data, including lists of allowed tools. They are also looking to use on-prem technologies and self-hosted LLMs to prevent data leaks. Even in these cases, the license terms must be carefully reviewed to ensure compliance.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="model-governance">Model Governance<a href="https://zenith.finos.org/blog/ai-ethics#model-governance" class="hash-link" aria-label="Direct link to Model Governance" title="Direct link to Model Governance">​</a></h2>
<p>Transparency in the use of these models within organizations is pivotal to determining which AI technologies are used within the company. Much like the need to track software dependencies to remediate security vulnerabilities quickly, models and AI software must be included in software inventories. Consider the case where a particular model is found to be compromised, or open to a particular security issue.</p>
<p>We need to know exactly where it is used, as well as in which environments (production versus development), so that we can act quickly to remediate the problem. <a href="https://www.ibm.com/docs/en/cloud-paks/cp-data/4.6.x?topic=governance-ai-factsheets" target="_blank" rel="noopener noreferrer">AI Factsheets and inclusion of models in existing software inventories</a> are recommended as the approach to handle these concerns.</p>
<p>Another consideration would be the potential accreditation of these technologies, or the organizations building the AI software you use. Governance and accreditation by leading organizations such as <a href="https://ai4good.org/" target="_blank" rel="noopener noreferrer">AI For Good</a> could help in the decision-making process alongside license review on which models from which organizations meet the required ethical standards that you wish to adhere to, as well as following best practices from these organizations on their usage.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="application-terms-of-use">Application Terms of Use<a href="https://zenith.finos.org/blog/ai-ethics#application-terms-of-use" class="hash-link" aria-label="Direct link to Application Terms of Use" title="Direct link to Application Terms of Use">​</a></h2>
<p>The terms of use of not just AI technologies, but also the applications we build that make use of them, are the final consideration within the ethical AI space. Part of the reason that we hear such terrifying stories of the impact of their usage is a lack of published and enforced codes of conduct stating how this technology should be used.</p>
<p>There is a balance in the ethics of usage. From the extreme misuse such as the example in <a href="https://www.bbc.co.uk/news/world-europe-66877718" target="_blank" rel="noopener noreferrer">Spain of AI image technology being used to generate naked images of young girls</a> to the more grey area of <a href="https://www.bbc.co.uk/programmes/w3ct5d93" target="_blank" rel="noopener noreferrer">uploading AI-generated videos of dead children to TikTok for education on child violence that caused legitimate upset to families</a>, applications without explicit terms of use and blocking of violators can find themselves inadvertently being used for nefarious activities. Much like file-sharing software such as <a href="https://cloudcovermusic.com/choosing-music-guide/limewire-napster/" target="_blank" rel="noopener noreferrer">Napster and LimeWire were shut down for copyright infringement due to users sharing music</a>, or <a href="https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html" target="_blank" rel="noopener noreferrer">the use of data on social media platforms by Cambridge Analytica to influence voters</a>, software intended to change the world and benefit people can often be used in unintended and unethical ways.</p>
<p>There is guidance out there from some model makers, such as the <a href="https://ai.meta.com/llama/responsible-use-guide/" target="_blank" rel="noopener noreferrer">Responsible Use Guidelines from Meta for developers</a>, which is referenced in the model cards of their models. But much like the communities and meetups that developers belong to, established codes of conduct in an easily digestible format that outline the acceptable activities in that group are also required to make clear the intended usage of these technologies. This must also extend to the applications we build using these technologies, for example, AI image generation applications, stating what is and is not okay.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="conclusion">Conclusion<a href="https://zenith.finos.org/blog/ai-ethics#conclusion" class="hash-link" aria-label="Direct link to Conclusion" title="Direct link to Conclusion">​</a></h2>
<p>While these ethical challenges exist within the current ecosystem and make adoption by financial services institutions difficult, with clear ethics and terms of use there is no reason that these technologies can't be used to improve the state of our lives and products.</p>
<p>AI is not all about replacing humans. One great recent example that leaves me with hope is the use of AI software trained on <a href="https://www.bbc.co.uk/news/health-66921926" target="_blank" rel="noopener noreferrer">keyhole brain surgery to help train neurosurgeons without putting patients at risk, as well as highlighting key areas and tumours</a>. Far from aiming to eliminate surgeons, it sets out a future to make brain surgery safer.</p>
<p>We still need to think about the malicious applications of our software. But we should also take advantage of the benefits AI brings.</p>]]></content:encoded>
            <category>artificial-intelligence</category>
        </item>
        <item>
            <title><![CDATA[Zenith Brain Trust Blog]]></title>
            <link>https://zenith.finos.org/blog/brain-trust-ai-reproducability</link>
            <guid>https://zenith.finos.org/blog/brain-trust-ai-reproducability</guid>
            <pubDate>Mon, 25 Sep 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The first of the key topics discussed by the brain trust really focused on the complexities and challenges associated with achieving reproducibility in the context of AI models.]]></description>
            <content:encoded><![CDATA[<p>The first of the key topics discussed by the brain trust really focused on the complexities and challenges associated with achieving reproducibility in the context of AI models. </p>
<p>From a Financial Services standpoint, ensuring that you can recreate the results of a particular query is essential - especially in the eyes of our regulators! This problem is exceptionally complex, compounded by the complexity of the models themselves, and presents a significant barrier to full reproducibility and adoption.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="transparency-and-reproducibility-challenges">Transparency and Reproducibility Challenges<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#transparency-and-reproducibility-challenges" class="hash-link" aria-label="Direct link to Transparency and Reproducibility Challenges" title="Direct link to Transparency and Reproducibility Challenges">​</a></h2>
<p>AI models, particularly deep learning models, are known for their intricate architectures and data-intensive training processes. During the discussion, it became evident that the sheer complexity of these models presents a significant barrier to full reproducibility as well as transparency into their inner workings.</p>
<p>One key challenge lies in the stochastic nature of AI model training. Random processes, such as weight initialization and data shuffling, introduce variability into the training process. Even slight variations in these random factors can lead to different outcomes when attempting to reproduce the model.</p>
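<p>As a minimal sketch of taming that randomness - illustrative, and not sufficient on its own for bit-exact reproducibility - a PyTorch training script would typically pin every seed up front:</p>
<pre><code class="language-python"># Sketch: pin the common sources of randomness in a PyTorch training script.
import random

import numpy as np
import torch

SEED = 42
random.seed(SEED)        # Python's built-in RNG (e.g., data shuffling)
np.random.seed(SEED)     # NumPy RNG (e.g., custom weight initialization)
torch.manual_seed(SEED)  # PyTorch CPU and CUDA RNGs

# Trade some speed for determinism in cuDNN algorithm selection.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
</code></pre>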
<p>The conversation highlighted the critical role of software environments in reproducibility. Managing dependencies, libraries, and software versions becomes crucial when trying to replicate a model's results. Any changes in these components can result in variations in model performance. Providing consistent results in regulated domains such as ours is important given the impact misleading results can have on clients and materials produced using these models.</p>
<p>Furthermore, the use of hardware accelerators like GPUs adds another layer of complexity. Driver and firmware updates for these accelerators can impact model training, necessitating careful management of hardware configurations.</p>
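<p>A hedged companion to the seeding sketch above: recording a software and hardware fingerprint next to every training run at least makes these variations diagnosable after the fact (which fields are worth capturing will differ per stack):</p>
<pre><code class="language-python"># Sketch: capture an environment fingerprint alongside each training run.
import json
import platform

import torch

fingerprint = {
    "python": platform.python_version(),
    "torch": torch.__version__,
    "cuda": torch.version.cuda,               # CUDA toolkit version
    "cudnn": torch.backends.cudnn.version(),  # cuDNN build number
    "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,
}

with open("run_fingerprint.json", "w") as f:
    json.dump(fingerprint, f, indent=2)
</code></pre>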
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="data-considerations">Data Considerations<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#data-considerations" class="hash-link" aria-label="Direct link to Data Considerations" title="Direct link to Data Considerations">​</a></h2>
<p>Data is the cornerstone of AI model training, and it presents its own set of reproducibility challenges. Datasets can evolve over time, with new data points being added or existing ones modified. Ensuring that the data used for model training remains static or adequately versioned is essential for reproducibility.</p>
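<p>One lightweight way to catch silent drift, sketched below, is to pin the training data to a content hash and fail loudly when it changes (real pipelines would more likely reach for a dedicated data-versioning tool, and the file name here is purely hypothetical):</p>
<pre><code class="language-python"># Sketch: pin a dataset to a content hash so silent changes are caught.
import hashlib

def dataset_sha256(path):
    """Stream the file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "replace-with-the-hash-recorded-at-training-time"  # placeholder

actual = dataset_sha256("training_data.csv")  # hypothetical file name
if actual != EXPECTED:
    raise RuntimeError(f"Dataset changed: expected {EXPECTED}, got {actual}")
</code></pre>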
<p>The discussion emphasized the importance of transparency in AI model development. Understanding the data sources, preprocessing steps, and hyperparameter choices is critical for others attempting to reproduce the results. The call for more comprehensive model documentation, akin to research papers, is growing louder within the AI community through mechanisms such as <a href="https://huggingface.co/blog/model-cards" target="_blank" rel="noopener noreferrer">data model cards</a> and <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/factsheets-model-inventory.html?audience=wdp&amp;context=cpdaas" target="_blank" rel="noopener noreferrer">model factsheets</a>.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="model-transparency-best-practices">Model Transparency Best Practices<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#model-transparency-best-practices" class="hash-link" aria-label="Direct link to Model Transparency Best Practices" title="Direct link to Model Transparency Best Practices">​</a></h2>
<p>While full reproducibility in AI models may be elusive due to inherent complexity and randomness, there are steps organizations and researchers can take to enhance it, as illustrated in the diagram below:</p>
<p><img loading="lazy" alt="Reproducability in AI Models Steps" src="https://zenith.finos.org/assets/images/brain_trust_AI_1-6339c446c54f37a741ec831e2ca5e72c.png" width="602" height="335" class="img_ev3q"></p>
<p>Let’s dive into each of these approaches in turn:</p>
<h3 class="anchor anchorWithStickyNavbar_LWe7" id="1-embrace-version-control">1. Embrace Version Control<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#1-embrace-version-control" class="hash-link" aria-label="Direct link to 1. Embrace Version Control" title="Direct link to 1. Embrace Version Control">​</a></h3>
<p>Implement robust version control for both code and data. Track changes meticulously to ensure that others can replicate your work accurately.</p>
<h3 class="anchor anchorWithStickyNavbar_LWe7" id="2-prioritize-comprehensive-documentation">2. Prioritize Comprehensive Documentation<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#2-prioritize-comprehensive-documentation" class="hash-link" aria-label="Direct link to 2. Prioritize Comprehensive Documentation" title="Direct link to 2. Prioritize Comprehensive Documentation">​</a></h3>
<p>Maintain detailed documentation that encompasses the model architecture, training process, data sources, and hyperparameters used. This documentation serves as a crucial guide for reproducibility.</p>
<h3 class="anchor anchorWithStickyNavbar_LWe7" id="3-leverage-containerization">3. Leverage Containerization<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#3-leverage-containerization" class="hash-link" aria-label="Direct link to 3. Leverage Containerization" title="Direct link to 3. Leverage Containerization">​</a></h3>
<p>Utilize containerization technologies like Docker to encapsulate your software environment. Containers facilitate sharing and ensure consistent execution across different setups.</p>
<h3 class="anchor anchorWithStickyNavbar_LWe7" id="4-advocate-for-standardized-practices">4. Advocate for Standardized Practices<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#4-advocate-for-standardized-practices" class="hash-link" aria-label="Direct link to 4. Advocate for Standardized Practices" title="Direct link to 4. Advocate for Standardized Practices">​</a></h3>
<p>Support and adhere to standardized practices and guidelines within the AI community. These standards promote consistency and improve the reproducibility of AI models.</p>
<h3 class="anchor anchorWithStickyNavbar_LWe7" id="5-promote-transparency">5. Promote Transparency<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#5-promote-transparency" class="hash-link" aria-label="Direct link to 5. Promote Transparency" title="Direct link to 5. Promote Transparency">​</a></h3>
<p>Transparency is key to reproducibility. Share not only the final model but also the knowledge and best practices associated with it. Document the entire pipeline, from data collection to model deployment.</p>
<p>While achieving full reproducibility in AI models may remain a challenge due to the inherent complexity and randomness, these recommendations provide a solid foundation for enhancing the replicability of AI research and applications.</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="licensing--legal-considerations">Licensing &amp; Legal Considerations<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#licensing--legal-considerations" class="hash-link" aria-label="Direct link to Licensing &amp; Legal Considerations" title="Direct link to Licensing &amp; Legal Considerations">​</a></h2>
<p>The realm of AI is rapidly evolving, and with it come a host of licensing and legal considerations. A recent discussion with our Brain Trust delved into the complex (and undefined) landscape of AI model usage, copyright, and the evolving role of corporate lawyers.</p>
<p>At the core of all of our discussion and concerns was the need for explainability in AI systems, even when humans are involved in decision-making. The challenge is that many AI technologies currently lack the ability to provide clear explanations for their outputs. This gap raises concerns about transparency and accountability.</p>
<p>Without the ability to explain how AI models arrive at certain decisions, organizations may face hurdles in using these technologies. This, in turn, has significant legal implications, especially in highly regulated sectors like finance.</p>
<p>On top of this, we noted that existing legal tools and processes have not quite caught up with the sorts of questions large enterprises have around copyright, provenance, and licensing concerning AI models. These tools simply haven't been tested thoroughly in this emerging field. No amount of <a href="https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute" target="_blank" rel="noopener noreferrer">selfie-taking monkeys</a> are going to help satisfy these queries until we really get <a href="https://hackernoon.com/doe-vs-github-ammended-complaints-on-copyright-infridgement-open-source-licenses-and-more" target="_blank" rel="noopener noreferrer">deep into the subject matter</a>.</p>
<p><img loading="lazy" alt="Portrait of a female Macaca nigra (Celebes crested macaque) in North Sulawesi, Indonesia, who triggered photographer David Slater&amp;#39;s camera." src="https://zenith.finos.org/assets/images/brain_trust_AI_2-e19a4946924a0004d4e00bbf8c2b9075.png" width="245" height="238" class="img_ev3q"></p>
<p>Our colleagues in Corporate Law find themselves at a crossroads, unsure whether to adopt a liberal or conservative approach to licensing. An interesting case cited during the discussion was Microsoft's <a href="https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/" target="_blank" rel="noopener noreferrer">acceptance of copyright liability for all uses of its Copilot AI</a>, a move that leaves corporate lawyers pondering the implications.</p>
<p>As AI technologies become more integrated into industries, there's a growing need for legal frameworks that can adapt to the nuances of AI:</p>
<ul>
<li>Current legal structures and copyright laws might not adequately address the complexities of AI model usage.</li>
<li>Establishing legal guidelines and frameworks for AI model licensing, copyright, and accountability will be essential.</li>
</ul>
<p>It's a dynamic field where the law is racing to catch up with technology. This is going to be the subject of a Brain Trust subcommittee to be planned for the near future (So stay tuned!).</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="security--data-integrity-in-ai-models">Security &amp; Data Integrity in AI Models<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#security--data-integrity-in-ai-models" class="hash-link" aria-label="Direct link to Security &amp; Data Integrity in AI Models" title="Direct link to Security &amp; Data Integrity in AI Models">​</a></h2>
<p>The rapid advancement of artificial intelligence (AI) technologies brings with it a pressing concern: security and data integrity. The Brain Trust had a number of thoughts on the intricacies of ensuring the safety and reliability of AI models in a rapidly evolving landscape.</p>
<p>One of the key points raised during the discussion was the importance of securing AI models. AI models can be vulnerable to various types of attacks, including:</p>
<ul>
<li>Data poisoning</li>
<li>Adversarial attacks</li>
<li>Model inversion attacks</li>
</ul>
<p>The security challenge extends beyond protecting AI models from external threats. It also involves implementing robust identity and access management (IAM) systems to ensure that only authorized individuals or systems can interact with these models, especially when considering privileged access on the cloud.</p>
<p>Participants highlighted the need for establishing provenance and a chain of custody for AI models. These mechanisms are crucial for tracking the origin and lineage of AI models, ensuring their integrity throughout their lifecycle. In sectors like finance, where accountability and auditability are paramount, having a clear record of an AI model's history becomes essential, allowing organizations to demonstrate compliance with regulatory requirements.</p>
<p>The discussion underscored the dynamic nature of AI security. As AI technology evolves, so do the threats and vulnerabilities. Organizations must adopt a proactive approach to future-proof their AI security measures. Collaboration among industry experts and researchers can play a crucial role in staying ahead of emerging threats. Regularly updating security protocols and mechanisms is essential to mitigate risks effectively. This will definitely need more discussion and scrutiny in the future!</p>
<h2 class="anchor anchorWithStickyNavbar_LWe7" id="ai-taxonomy--standards">AI Taxonomy &amp; Standards<a href="https://zenith.finos.org/blog/brain-trust-ai-reproducability#ai-taxonomy--standards" class="hash-link" aria-label="Direct link to AI Taxonomy &amp; Standards" title="Direct link to AI Taxonomy &amp; Standards">​</a></h2>
<p>In the fast-evolving landscape of artificial intelligence (AI), establishing clear taxonomies and standards is emerging as a critical enabler for the industry's growth and interoperability.</p>
<p>One of the fundamental challenges in AI is the lack of a universal language to describe AI components, models, and systems. Taxonomy, in this context, serves as a structured framework that classifies and categorizes AI elements, making it easier for stakeholders to communicate effectively. Creating a shared understanding of AI terminology is vital for collaboration and interoperability, especially in an ecosystem where diverse tools, frameworks, and technologies coexist.</p>
<p>Participants emphasized the importance of developing standards that allow different AI systems to seamlessly interact with each other. This pursuit of interoperability extends to various domains, including finance, where data sharing and integration are paramount. Standardizing interfaces and data formats fosters an ecosystem where AI models and tools can work together harmoniously. It reduces integration friction, accelerates innovation, and enables organizations to leverage AI more effectively.</p>
<p>This will be the subject of further conversation with the Interoperability SIG and FINOS Labs at a later date.</p>
<p>Taxonomies and standards also play a crucial role in ensuring the ethical and secure use of AI. Establishing guidelines for responsible AI practices is essential in addressing concerns related to bias, fairness, and transparency. In the financial sector, where regulatory compliance is a primary concern, adhering to ethical standards becomes a competitive advantage. It builds trust among customers and regulators while minimizing legal risks.</p>
<p>Developing comprehensive taxonomies and standards requires input from diverse stakeholders, including researchers, policymakers, and industry experts. This is going to become a key objective of the wider FINOS initiative as we explore many other emerging technologies in the future, so it’s going to be a good initial focus for future Brain Trust meetings.</p>
<p>Looking ahead, this ongoing journey demands active participation from the wider AI community and a commitment to ethical, secure, and interoperable AI systems - a commitment that will shape the future of AI. With the proper taxonomies and standards, we lay the foundation for innovation, trust, and responsible AI practices. It's a collective effort that promises to drive the industry forward while ensuring that AI benefits everyone.</p>
<p><strong><a href="mailto:zenith-leadership@lists.finos.org" target="_blank" rel="noopener noreferrer">The Zenith Team</a></strong></p>]]></content:encoded>
            <category>brain trust</category>
        </item>
        <item>
            <title><![CDATA[Google adds PQC to Chrome]]></title>
            <link>https://zenith.finos.org/blog/chrome-pqc</link>
            <guid>https://zenith.finos.org/blog/chrome-pqc</guid>
            <pubDate>Sat, 12 Aug 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Google announced plans to add support for quantum-resistant encryption algorithms in its Chrome browser, starting with version 116. The chosen algorithm, X25519Kyber768, combines the output of X25519, an elliptic curve algorithm widely used for key agreement in TLS, and Kyber-768 to create a strong session key to encrypt TLS connections, providing roughly the security equivalent of AES-192.]]></description>
            <content:encoded><![CDATA[<p>Google <a href="https://thehackernews.com/2023/08/enhancing-tls-security-google-adds.html" target="_blank" rel="noopener noreferrer">announced</a> plans to add support for quantum-resistant encryption algorithms in its Chrome browser, starting with version 116. The chosen algorithm, X25519Kyber768, combines the output of X25519, an elliptic curve algorithm widely used for key agreement in TLS, and Kyber-768 to create a strong session key to encrypt TLS connections, providing roughly the security equivalent of AES-192.</p>
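<p>Conceptually, a hybrid scheme like X25519Kyber768 derives the session key from both a classical and a post-quantum shared secret, so an attacker has to break both. The sketch below is illustrative only: it uses the real X25519 API from the Python cryptography package, but random bytes stand in for the Kyber-768 encapsulation, which that package does not provide:</p>
<pre><code class="language-python"># Sketch of hybrid key agreement: classical X25519 plus a post-quantum secret.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical part: a standard X25519 key agreement between two parties.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
x25519_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum part: in a real stack this comes from a Kyber-768 KEM
# encapsulation; random bytes stand in for it here.
kyber_secret = os.urandom(32)

# Derive the session key from both secrets, so it stays safe
# as long as either component remains unbroken.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-kex-demo",
).derive(x25519_secret + kyber_secret)
</code></pre>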
<p>While the equivalency of AES-192 may not provide a security improvement for many adopters today, the general inclusion of PQC algorithms is a welcomed improvement in what will be a multi-year adoption journey across the industry to protect user network traffic against future quantum cryptanalysis.</p>]]></content:encoded>
            <category>quantum-computing</category>
        </item>
        <item>
            <title><![CDATA[Microsoft announces NDv5]]></title>
            <link>https://zenith.finos.org/blog/nvd5</link>
            <guid>https://zenith.finos.org/blog/nvd5</guid>
            <pubDate>Mon, 07 Aug 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Microsoft announced the general availability of the Azure NDv5 VM series. Each NDv5 VM comes with 8 NVIDIA H100 GPUs, each providing 3,958 teraFLOPS (8-bit FP8), 80GB of GPU memory, and 3.35TB/s of GPU memory bandwidth. The 8 GPUs are interconnected through NVLink4, enabling them to communicate with each other at 900 GB/s. Each GPU connects to the CPU through PCIe5 at 64 GB/s.]]></description>
            <content:encoded><![CDATA[<p>Microsoft <a href="https://azure.microsoft.com/en-us/blog/scale-generative-ai-with-new-azure-ai-infrastructure-advancements-and-availability/" target="_blank" rel="noopener noreferrer">announced</a> the general availability of the Azure NDv5 VM series. Each NDv5 VM comes with 8 NVIDIA H100 GPUs, each providing 3,958 teraFLOPS (8-bit FP8), 80GB of GPU memory, and 3.35TB/s of GPU memory bandwidth. The 8 GPUs are interconnected through NVLink4, enabling them to communicate with each other at 900 GB/s. Each GPU connects to the CPU through PCIe5 at 64 GB/s.</p>
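<p>To put the per-GPU figures in perspective, a quick back-of-the-envelope calculation (ours, not Microsoft's) of what a single 8-GPU VM adds up to:</p>
<pre><code class="language-python"># Back-of-the-envelope aggregates for one 8-GPU NDv5 VM, from the specs above.
gpus = 8
fp8_tflops = 3958  # teraFLOPS per GPU at 8-bit FP8
mem_gb = 80        # GB of GPU memory per GPU

print(f"FP8 compute: {gpus * fp8_tflops / 1000:.1f} petaFLOPS per VM")  # ~31.7
print(f"GPU memory:  {gpus * mem_gb} GB per VM")                        # 640 GB
</code></pre>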
<p>Across different VMs, GPUs are interconnected through ConnectX7 InfiniBand, enabling them to communicate with each other at 400Gb/s per GPU (i.e., 3.2 Tb/s per VM). These best-in-class GPU connectivity options substantially improve the training and inferencing performance of large language models (LLMs), which require heavy communication between GPUs. Azure allows easily scaling from 8 GPUs to a few tens/hundreds to many thousands of interconnected GPUs (referred to as "supercomputers") depending on the compute needs of a particular inferencing or training workload.</p>
            <category>artificial-intelligence</category>
        </item>
        <item>
            <title><![CDATA[Analog Iterative Machine's lightning-fast approach to optimization]]></title>
            <link>https://zenith.finos.org/blog/AIM-machines</link>
            <guid>https://zenith.finos.org/blog/AIM-machines</guid>
            <pubDate>Thu, 27 Jul 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Analog Iterative Machines introduce a new approach to optimization problems, which are often hard to solve using conventional digital computers. The approach is based on analog iterative machines, which are devices that use physical phenomena to perform computations in parallel and continuously. The article describes how analog iterative machines can solve optimization problems faster and more efficiently than digital computers, using examples from machine learning, physics, and engineering. It also discusses the challenges and opportunities of developing analog iterative machines, such as hardware design, noise reduction, and scalability.]]></description>
            <content:encoded><![CDATA[<p>Analog Iterative Machines introduce a new approach to optimization problems, which are often hard to solve using conventional digital computers. The approach is based on analog iterative machines, which are devices that use physical phenomena to perform computations in parallel and continuously. <a href="https://www.microsoft.com/en-us/research/blog/unlocking-the-future-of-computing-the-analog-iterative-machines-lightning-fast-approach-to-optimization/" target="_blank" rel="noopener noreferrer">The article</a> describes how analog iterative machines can solve optimization problems faster and more efficiently than digital computers, using examples from machine learning, physics, and engineering. It also discusses the challenges and opportunities of developing analog iterative machines, such as hardware design, noise reduction, and scalability.</p>
<p><img loading="lazy" alt="Analog Iterative Machine blog hero image" src="https://zenith.finos.org/assets/images/AIM-BlogHero-1400x788-1-1280x720-553f94e4a75ec6eb771f792cce75c176.jpg" width="1280" height="720" class="img_ev3q"></p>]]></content:encoded>
            <category>artificial-intelligence</category>
        </item>
        <item>
            <title><![CDATA[DeepSpeed ZeRO++, a leap in speed for LLM and chat model training with 4X less communication]]></title>
            <link>https://zenith.finos.org/blog/deepspeed-zero</link>
            <guid>https://zenith.finos.org/blog/deepspeed-zero</guid>
            <pubDate>Thu, 27 Jul 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[DeepSpeed ZeRO++ is a new optimization technique that reduces the communication and memory overhead of training large language models (LLMs) and chat models. It enables training LLMs and chat models with up to 170 billion parameters on a single GPU, and up to 13 trillion parameters on 512 GPUs, with 4x less communication than previous methods like ZeRO-3. It also improves the scalability, efficiency, and flexibility of LLM and chat model training, allowing researchers to explore new frontiers in natural language processing.]]></description>
            <content:encoded><![CDATA[<p>DeepSpeed ZeRO++ is a <a href="https://www.microsoft.com/en-us/research/blog/deepspeed-zero-a-leap-in-speed-for-llm-and-chat-model-training-with-4x-less-communication/" target="_blank" rel="noopener noreferrer">new optimization technique</a> that reduces the communication and memory overhead of training large language models (LLMs) and chat models. It enables training LLMs and chat models with up to 170 billion parameters on a single GPU, and up to 13 trillion parameters on 512 GPUs, with 4x less communication than previous methods like ZeRO-3. It also improves the scalability, efficiency, and flexibility of LLM and chat model training, allowing researchers to explore new frontiers in natural language processing.</p>
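<p>For readers who want to try it, ZeRO++ is switched on through the DeepSpeed config rather than code changes. A hedged sketch follows - the flag names match the DeepSpeed ZeRO++ tutorial at the time of writing, but verify them against the current documentation, and all values are illustrative:</p>
<pre><code class="language-python"># Sketch: a DeepSpeed config dict enabling the three ZeRO++ optimizations
# on top of ZeRO stage 3 (passed to deepspeed.initialize via config=...).
ds_config = {
    "train_batch_size": 32,  # illustrative value
    "zero_optimization": {
        "stage": 3,
        "zero_quantized_weights": True,    # qwZ: quantized weight communication
        "zero_hpz_partition_size": 8,      # hpZ: secondary weight partition per node
        "zero_quantized_gradients": True,  # qgZ: quantized gradient reduction
    },
}
</code></pre>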
<p><img loading="lazy" alt="DeepSpeed ZeRO++ architecture figure" src="https://zenith.finos.org/assets/images/DeepSpeedZero-opp_figure1-edited-0bd9d778478d4642756d0985a809b2e1.png" width="1200" height="675" class="img_ev3q"></p>]]></content:encoded>
            <category>artificial-intelligence</category>
        </item>
        <item>
            <title><![CDATA[Quantum Computing in Finance - Two Systematic Reviews]]></title>
            <link>https://zenith.finos.org/blog/quantum-finance-review</link>
            <guid>https://zenith.finos.org/blog/quantum-finance-review</guid>
            <pubDate>Tue, 11 Jul 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Two papers were recently published summarizing the landscape of Quantum Computing developments in Finance. These papers help establish the common knowledge base allowing for further experimentation and use-case development that Zenith is looking to foster.]]></description>
            <content:encoded><![CDATA[<p>Two papers were recently published summarizing the landscape of Quantum Computing developments in Finance. These papers help establish the common knowledge base allowing for further experimentation and use-case development that Zenith is looking to foster.</p>
<p>The first <a href="https://arxiv.org/abs/2307.01155" target="_blank" rel="noopener noreferrer">paper</a> provides development overviews of portfolio optimization, fraud detection, and Monte Carlo methods for derivative pricing and risk calculation, while also providing a comprehensive overview of the applications of quantum computing in the field of blockchain technology. The second <a href="https://www.nature.com/articles/s42254-023-00603-1" target="_blank" rel="noopener noreferrer">paper</a> focuses on stochastic modelling, optimization, and machine learning.</p>
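<p>As a point of reference for what "Monte Carlo methods for derivative pricing" means classically, the hedged sketch below prices a European call option by simulation with illustrative parameters - this sampling-heavy baseline is precisely what quantum amplitude estimation promises to speed up quadratically:</p>
<pre><code class="language-python"># Sketch: classical Monte Carlo pricing of a European call option.
import numpy as np

rng = np.random.default_rng(0)
s0, strike, rate, vol, t, n = 100.0, 105.0, 0.05, 0.2, 1.0, 1_000_000

# Simulate terminal prices under geometric Brownian motion.
z = rng.standard_normal(n)
s_t = s0 * np.exp((rate - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)

# Discount the average payoff back to today.
payoff = np.maximum(s_t - strike, 0.0)
price = np.exp(-rate * t) * payoff.mean()
print(f"European call estimate: {price:.2f}")  # about 8.0 for these parameters
</code></pre>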
<p><img loading="lazy" alt="Quantum Finance Chart" src="https://zenith.finos.org/assets/images/quantum-finance-12da98cdd8bb070bdaf68f3b6b011f56.jpg" width="1024" height="768" class="img_ev3q"></p>]]></content:encoded>
            <category>quantum-computing</category>
        </item>
        <item>
            <title><![CDATA[Meta releasing 'Game Super Resolution' technology for Quest]]></title>
            <link>https://zenith.finos.org/blog/gsr-meta-quest</link>
            <guid>https://zenith.finos.org/blog/gsr-meta-quest</guid>
            <pubDate>Mon, 10 Jul 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Meta Quest Super Resolution is a new feature for VR developers that uses Qualcomm’s Snapdragon Game Super Resolution technology to enhance the visuals of their apps or games. It works similarly to AMD's FSR, but is optimized for the Adreno GPU in a single pass, although it is not as powerful as NVIDIA's DLSS. It is better than normal sharpening and reduces blurring and artifacts, but it also has a GPU performance cost that varies depending on the content. It is not an AI system, and it has some limitations, such as not supporting YUV textures and cube maps. It will be available in the v55 Unity Integration SDK.]]></description>
            <content:encoded><![CDATA[<p>Meta Quest Super Resolution is a new feature for VR developers that uses Qualcomm’s Snapdragon Game Super Resolution technology to enhance the visuals of their apps or games. It works similarly to AMD's FSR, but is optimized for the Adreno GPU in a single pass, although it is not as powerful as NVIDIA's DLSS. It is better than normal sharpening and reduces blurring and artifacts, but it also has a GPU performance cost that varies depending on the content. It is not an AI system, and it has some limitations, such as not supporting YUV textures and cube maps. It will be available in the v55 Unity Integration SDK.</p>
<p><img loading="lazy" alt="Different sharpenings" src="https://zenith.finos.org/assets/images/Meta-Quest-Super-Resolution-5fc04f6c65358655883a66d9bc555354.jpg" width="1920" height="655" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Zoom joins the tools in Meta Horizon Workrooms]]></title>
            <link>https://zenith.finos.org/blog/zoom-meta-horizon</link>
            <guid>https://zenith.finos.org/blog/zoom-meta-horizon</guid>
            <pubDate>Mon, 10 Jul 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Meta Horizon Workrooms is a virtual office space that now integrates with Zoom, a widely used video conferencing tool. You can join Zoom meetings from Workrooms in VR, or add Workrooms to any Zoom call. This way, you can enjoy the features of both tools, such as screen sharing, whiteboards, sticky notes, gestures, and web chat. You can use Workrooms with or without a headset.]]></description>
            <content:encoded><![CDATA[<p>Meta Horizon Workrooms is a virtual office space that now <a href="https://forwork.meta.com/horizon-workrooms/integrations/" target="_blank" rel="noopener noreferrer">integrates</a> with Zoom, a widely used video conferencing tool. You can join Zoom meetings from Workrooms in VR, or add Workrooms to any Zoom call. This way, you can enjoy the features of both tools, such as screen sharing, whiteboards, sticky notes, gestures, and web chat. You can use Workrooms with or without a headset.</p>
<p><img loading="lazy" alt="Meta Horizon Worlds users in a zoom call" src="https://zenith.finos.org/assets/images/zoom-horizon-1bda305753cae68b275bab577f63fb7c.png" width="972" height="545" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Unity Muse and Unity Sentis powered by AI]]></title>
            <link>https://zenith.finos.org/blog/unity-muse-sentis</link>
            <guid>https://zenith.finos.org/blog/unity-muse-sentis</guid>
            <pubDate>Tue, 27 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Unity Muse and Unity Sentis are two new AI-powered tools that aim to help creators design immersive and interactive experiences. Unity Muse is an AI-based assistant that lets creators generate assets and behaviors in the Unity Editor using natural input such as text prompts. Unity Sentis makes it possible to embed neural networks in the Unity Runtime, so AI models can run directly in builds on end-user devices. Together, these tools aim to make building rich, interactive real-time 3D experiences faster and more accessible.]]></description>
            <content:encoded><![CDATA[<p><a href="https://blog.unity.com/engine-platform/introducing-unity-muse-and-unity-sentis-ai" target="_blank" rel="noopener noreferrer">Unity Muse and Unity Sentis</a> are two new AI-powered tools that aim to help creators design immersive and interactive experiences. Unity Muse is an AI-based assistant that lets creators generate assets and behaviors in the Unity Editor using natural input such as text prompts. Unity Sentis makes it possible to embed neural networks in the Unity Runtime, so AI models can run directly in builds on end-user devices. Together, these tools aim to make building rich, interactive real-time 3D experiences faster and more accessible.</p>
<p><img loading="lazy" alt="Unity Logo with AI next to it" src="https://zenith.finos.org/assets/images/AI-BetaBlogHeader-1230x410-891ffcb90fa331b026c6bd1bdadb1aaa.jpeg" width="1230" height="410" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
            <category>ai</category>
        </item>
        <item>
            <title><![CDATA[IBM Announces Utility Towards Useful Quantum]]></title>
            <link>https://zenith.finos.org/blog/ibm-quantum-utility</link>
            <guid>https://zenith.finos.org/blog/ibm-quantum-utility</guid>
            <pubDate>Mon, 26 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[IBM announced a significant breakthrough in quantum computing, demonstrating that quantum computers can produce accurate results on a scale of 100+ qubits, surpassing classical approaches. The team used the 127-qubit superconducting 'Eagle' device to generate large, entangled states that simulate the dynamics of spins in a model of a material and accurately predict properties such as its magnetization. The experiment showed that a quantum computer, using advanced error mitigation techniques, outperformed classical simulations.]]></description>
            <content:encoded><![CDATA[<p>IBM <a href="https://research.ibm.com/blog/utility-toward-useful-quantum" target="_blank" rel="noopener noreferrer">announced</a> a significant breakthrough in quantum computing, demonstrating that quantum computers can produce accurate results on a scale of 100+ qubits, surpassing classical approaches. The team used the 127-qubit superconducting 'Eagle' device to generate large, entangled states that simulate the dynamics of spins in a model of a material and accurately predict properties such as its magnetization. The experiment showed that a quantum computer, using advanced error mitigation techniques, outperformed classical simulations.</p>
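<p>One of the central ideas behind this kind of error mitigation is zero-noise extrapolation: the same circuit is run at deliberately amplified noise levels, and the measured expectation values are extrapolated back to the zero-noise limit. Below is a minimal numerical sketch of just the extrapolation step, with synthetic data standing in for hardware measurements; it is not IBM's implementation.</p>
<pre><code># Minimal sketch of zero-noise extrapolation (ZNE), one of the error
# mitigation ideas used in utility-scale quantum experiments. Synthetic
# data stands in for expectation values measured on real hardware.
import numpy as np

# Noise amplification factors: 1.0 is the native hardware noise level
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])

# Pretend measurements: exact value 0.85, decaying exponentially with noise
true_value, decay = 0.85, 0.35
measured = true_value * np.exp(-decay * noise_factors)

# Fit log(expectation) linearly in the noise factor, extrapolate to zero
slope, intercept = np.polyfit(noise_factors, np.log(measured), 1)
mitigated = np.exp(intercept)

print(f"raw (factor 1.0):       {measured[0]:.4f}")
print(f"mitigated (zero noise): {mitigated:.4f}  vs exact {true_value}")
</code></pre>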
<p>This achievement marks a milestone in proving the utility of quantum computers as scientific tools capable of tackling complex problems that classical systems struggle with. In response to this breakthrough, IBM plans to upgrade its full fleet of IBM Quantum systems to large-scale quantum processors with a minimum of 127 qubits, enabling the exploration of an entirely new class of computational problems.</p>
<p>The impact of this development on financial institutions is potentially significant. With the availability of utility-scale quantum processors, financial organizations can explore the application of quantum computing to optimization problems. Collaborative working groups are being formed to identify and advance the use of quantum calculations in areas such as sustainability and finance, with the aim of leveraging quantum advantage to solve optimization problems. This indicates a growing recognition of the potential value quantum computing holds for the financial industry, as the technology continues to advance, offering computational power beyond what classical systems can achieve.</p>
<p align="center"><img src="https://zenith.finos.org/img/blog/quantum-nature.png" alt="quantum-nature" width="50%" height="50%"></p>]]></content:encoded>
            <category>quantum-computing</category>
        </item>
        <item>
            <title><![CDATA[Meta Quest+, the new VR subscription service for Quest headsets]]></title>
            <link>https://zenith.finos.org/blog/meta-quest-plus</link>
            <guid>https://zenith.finos.org/blog/meta-quest-plus</guid>
            <pubDate>Mon, 26 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Joining the likes of Microsoft's Game Pass and Sony's PlayStation Plus, Meta is also entering the subscription market. Meta Quest+, announced today, is a new VR subscription service that offers access to a curated selection of games and apps on the Meta Quest platform. For a monthly or annual fee, subscribers can enjoy unlimited downloads and playtime of over 100 titles. Subscribers also get exclusive benefits such as discounts, free trials, and early access to new releases. Meta Quest+ is only available on the Meta Quest Store and has an introductory offer of $1 USD for the first month, followed by $7.99 USD per month or $59.99 USD per year.]]></description>
            <content:encoded><![CDATA[<p>Joining the likes of Microsoft's Game Pass and Sony's PlayStation Plus, Meta is also entering the subscription market. Meta Quest+, announced today, is a new VR subscription service that offers access to a curated selection of games and apps on the Meta Quest platform. For a monthly or annual fee, subscribers can enjoy unlimited downloads and playtime of over 100 titles. Subscribers also get exclusive benefits such as discounts, free trials, and early access to new releases. Meta Quest+ is only available on the <a href="https://www.oculus.com/experiences/quest/meta-quest-plus/" target="_blank" rel="noopener noreferrer">Meta Quest Store</a> and has an introductory offer of $1 USD for the first month, followed by $7.99 USD per month or $59.99 USD per year.</p>
<p><img loading="lazy" alt="Meta Quest+ logo" src="https://zenith.finos.org/assets/images/questplus-a5be3f7696c3d940086735e6f16cc204.png" width="1551" height="188" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Azure remote rendering supports Meta Quest headsets]]></title>
            <link>https://zenith.finos.org/blog/azure-remote-rendering</link>
            <guid>https://zenith.finos.org/blog/azure-remote-rendering</guid>
            <pubDate>Fri, 23 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Microsoft announced the public preview of Azure Remote Rendering support for Meta Quest 2 and Meta Quest Pro virtual reality headsets.  Azure Remote Rendering is a service that enables developers to render high-quality interactive 3D content and stream it to devices like HoloLens 2 or desktops/laptops in real time. Azure Remote Rendering uses hybrid rendering, which combines remote content with locally rendered content, and provides an easy way to integrate the service into existing applications. Customers and partners can now use Azure Remote Rendering on Meta Quest 2 and Meta Quest Pro for various use cases, such as CAD review, visualization, training, and pass-through. To get started with Azure Remote Rendering on Meta Quest 2 and Meta Quest Pro, developers can follow the updated guide for building for Meta Quest in the documentation.]]></description>
            <content:encoded><![CDATA[<p>Microsoft announced the public preview of Azure Remote Rendering support for Meta Quest 2 and Meta Quest Pro virtual reality headsets.  Azure Remote Rendering is a service that enables developers to render high-quality interactive 3D content and stream it to devices like HoloLens 2 or desktops/laptops in real time. Azure Remote Rendering uses hybrid rendering, which combines remote content with locally rendered content, and provides an easy way to integrate the service into existing applications. Customers and partners can now use Azure Remote Rendering on Meta Quest 2 and Meta Quest Pro for various use cases, such as CAD review, visualization, training, and pass-through. To get started with Azure Remote Rendering on Meta Quest 2 and Meta Quest Pro, developers can follow the <a href="https://learn.microsoft.com/en-us/azure/remote-rendering/" target="_blank" rel="noopener noreferrer">updated guide</a> for building for Meta Quest in the documentation.</p>
<p><img loading="lazy" alt="Azure Remote Rendering on a Quest headset" src="https://zenith.finos.org/assets/images/AzureRemoteRenderingShowcaseQuestHT-20230621-150648-3022ffdc69b94a2c53642ab05a7a51f4.jpg" width="999" height="999" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Apple announces visionOS SDK]]></title>
            <link>https://zenith.finos.org/blog/AVP-sdk</link>
            <guid>https://zenith.finos.org/blog/AVP-sdk</guid>
            <pubDate>Thu, 22 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Today Apple announced many resources for the Apple Vision Pro headset, among them are:]]></description>
            <content:encoded><![CDATA[<p>Today Apple announced many resources for the Apple Vision Pro headset, among them are: </p>
<ul>
<li><a href="https://developer.apple.com/visionos/" target="_blank" rel="noopener noreferrer">visionOS SDK</a></li>
<li><a href="https://www.figma.com/community/file/1253443272911187215/Apple-Design-Resources---visionOS" target="_blank" rel="noopener noreferrer">visionOS Design Resources on Figma</a></li>
<li><a href="https://developer.apple.com/design/human-interface-guidelines/designing-for-visionos" target="_blank" rel="noopener noreferrer">visionOS Human Interface Guidelines</a></li>
</ul>
<p>These resources, along with the 46 training episodes <a href="https://developer.apple.com/visionos/learn/" target="_blank" rel="noopener noreferrer">announced</a> at WWDC23, will form the foundation for the upcoming experiences people build.</p>
<p><img loading="lazy" alt="AVP headset" src="https://zenith.finos.org/assets/images/1687430217409-6372c90da4333239a315bf7615343c7b.jpeg" width="800" height="420" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
        <item>
            <title><![CDATA[Meta increases Quest performance and ships a new Unity SDK]]></title>
            <link>https://zenith.finos.org/blog/meta-updates-0622</link>
            <guid>https://zenith.finos.org/blog/meta-updates-0622</guid>
            <pubDate>Thu, 22 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The new version 55 software update for the Meta Quest 2 and Quest Pro VR headsets increases their CPU and GPU performance and adds new features. The Quest 2 will get a 19 percent GPU speed increase, while Quest Pro owners will get an 11 percent jump. Both headsets will also get up to a 26 percent increase in CPU performance. The update also introduces Dynamic Resolution Scaling, a Messenger app in VR, an Explore tab with media content and Reels, and multi-touch gesture support for the Meta Quest Browser.]]></description>
            <content:encoded><![CDATA[<p>The new version 55 software update for the Meta Quest 2 and Quest Pro VR headsets increases their CPU and GPU performance and adds new features. The Quest 2 will get a 19 percent GPU speed increase, while Quest Pro owners will get an 11 percent jump. Both headsets will also get up to a 26 percent increase in CPU performance. The update also introduces Dynamic Resolution Scaling, a Messenger app in VR, an Explore tab with media content and Reels, and multi-touch gesture support for the Meta Quest Browser.</p>
<p>The Meta Quest 3 VR headset, which was announced earlier this month and should bring enhanced capabilities, won't be available until September. The headset will have color passthrough and improved spatial awareness for more immersive and realistic mixed reality experiences that fit better into everyday life. Meanwhile, Unity, which continues to be the most popular development environment for mobile VR applications, has just released a Meta OpenXR package that supports features such as passthrough, plane detection, raycasting, and anchors on Quest headsets. Unity has also updated some sample projects that use AR Foundation, a framework for cross-platform development of AR applications for mobile devices and XR headsets. Developers who want to create mixed reality apps for Quest headsets need to use <a href="https://unity.com/releases/lts" target="_blank" rel="noopener noreferrer">Unity 2022 LTS</a>, <a href="https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@5.1/manual/index.html" target="_blank" rel="noopener noreferrer">AR Foundation</a>, the <a href="https://docs.unity3d.com/Packages/com.unity.xr.meta-openxr@0.1/manual/index.html" target="_blank" rel="noopener noreferrer">Meta OpenXR package</a>, and Meta's <a href="https://developer.oculus.com/presence-platform/" target="_blank" rel="noopener noreferrer">Presence Platform</a>.</p>
<p><img loading="lazy" alt="Meta headset" src="https://zenith.finos.org/assets/images/_2ca7446a-d6e3-4c5e-9d22-3703c9bf7dae-2a6ef416699621a98e4aba1ea882428e.jpeg" width="1024" height="1024" class="img_ev3q"></p>]]></content:encoded>
            <category>spatial-computing</category>
        </item>
    </channel>
</rss>