
Computers

Australia Forces Tech Firms to Pay News Providers for Content

Facebook and Google will now have to pay Australian news outlets for their content, after legislators passed a law requiring tech firms to compensate the country's news providers.

Mark Zuckerberg, the founder of Facebook, took part in discussions on the amendments to the law.

During the negotiations, Facebook blocked users in Australia from using Facebook to share news. The service was restored on Tuesday following the agreements.

Under the amended code, Australia will exempt Facebook if it signs enough agreements with local news outlets to compensate them for their content.

All the tech firms affected by the new code have one month to achieve compliance. The amendment satisfied Rod Sims, who prepared the code in his role as competition regulator. Sims believed the law would reduce the imbalance of power between Australian news outlets and the tech giants Facebook and Google.

“All signs are good,” Sims explained. “The purpose of the code is to address the market power that clearly Google and Facebook have. Google and Facebook need media, but they don’t need any particular media company, and that meant media companies couldn’t do commercial deals.”

Australians were unhappy with Facebook's ban on sharing news because government and non-profit pages were also affected. Even public health organizations crucial to keeping people informed about the Covid-19 pandemic were not spared.

The effect of the new law will be to institute a groundbreaking protocol for handling disputes in Australia. Other governments will be keenly following the new process.

Australian Prime Minister Scott Morrison said that Bing would replace Google's search engine if Google opted to pull out of Australia in protest of the new rules. Morrison even spoke personally with Satya Nadella, the CEO of Microsoft, to discuss the scenario.

Country Press Australia, a group of 161 regional newspapers, is apprehensive that publications in smaller towns may not benefit from the deals committing tech firms to pay for news content.

Sims said he expected Google and Facebook to strike deals with outlets in large cities first, but that all journalism outlets would benefit eventually.

“I don’t see any reason why anybody should doubt that all journalism will benefit,” he said.

“These things take time. Google and Facebook don’t have unlimited resources to go around talking to everybody. I think this has got a long way to play out,” he added.

The Dawn of AI-Integrated Computing: Microsoft’s New Copilot Key Revolutionizes PC Interaction

In a groundbreaking move, Microsoft is set to transform personal computing by introducing an AI-specific key on keyboards, marking a significant leap in the integration of artificial intelligence in everyday technology. This development, starting with new computers running Windows 11, heralds a new era where generative AI technology becomes more accessible and intertwined with our daily digital interactions.

The Emergence of the Copilot Key

The new feature, known as the “Copilot key,” is designed to launch Microsoft’s AI chatbot, a direct product of its collaboration with OpenAI, the creators of ChatGPT. This initiative is not just a technological advancement but a strategic move by Microsoft to leverage its partnership with OpenAI, transforming its software into a gateway for generative AI applications (Voice of America).

Shifting Trends in AI Accessibility

While most people currently access the internet and AI applications via smartphones, this innovation by Microsoft is expected to ignite a competitive streak in the technology sector, especially in AI. However, the integration of AI into such common devices raises several ethical and legal questions. Notably, The New York Times recently initiated legal action against both OpenAI and Microsoft, citing concerns over copyright infringement by AI tools like ChatGPT and Copilot (The New York Times).

A Historical Perspective on Keyboard Design

The introduction of the AI key is Microsoft’s most significant alteration to PC keyboards since the debut of the special Windows key in the 1990s. The AI key, adorned with the Copilot logo, will be conveniently located near the space bar, replacing either the right “CTRL” key or a menu key on various computer models.

The Broader Context of Special Keys

Microsoft’s initiative follows a historical trend of special keys on keyboards. Apple pioneered this concept in the 1980s with its “Command” key, and Google introduced a search button on its Chromebooks. Google even experimented with an AI-specific key on its now-discontinued Pixelbook. However, Microsoft’s dominant position in the personal computer market, with agreements with major manufacturers like Lenovo, Dell, and HP, gives it a significant advantage. Approximately 82% of all desktop computers, laptops, and workstations run Windows, compared to 9% for Apple’s operating system and just over 6% for Google’s (IDC).

Industry Adoption and Future Prospects

Dell Technologies has already announced the inclusion of the Copilot key in its latest XPS laptops, and other manufacturers are expected to follow suit. Microsoft’s own Surface devices will also feature this key, with several companies anticipated to showcase their new models at the CES show in Las Vegas.

Conclusion

The introduction of the Copilot key by Microsoft is more than just a hardware innovation; it represents a paradigm shift in how we interact with our computers. By embedding AI directly into the keyboard, Microsoft is not only enhancing user experience but also paving the way for more advanced and intuitive computing. As we embrace this new era of AI-integrated computing, it is crucial to address the ethical and legal implications to ensure that this technological evolution benefits all users responsibly.

The Future of AI and Quantum Computing: A Realistic Perspective

In the rapidly evolving landscape of artificial intelligence (AI) and quantum computing, the opinions of industry leaders can significantly influence the direction of technological advancements. Yann LeCun, Meta’s chief AI scientist, recently offered a grounded perspective on these technologies, providing a contrast to the often hyperbolic narratives surrounding AI’s future capabilities and the potential of quantum computing.

AI’s Journey to Sentience: A Long Road Ahead

LeCun, a pioneer in deep learning, expressed skepticism about the imminent arrival of artificial general intelligence (AGI) – AI with human-level intelligence. Speaking at the Viva Tech conference in Paris, he highlighted the limitations of current AI systems, which, despite their ability to process vast amounts of text, lack the common sense necessary for true sentience. This view contrasts with Nvidia CEO Jensen Huang’s assertion that AI will rival human intelligence in less than five years, as reported by CNBC. LeCun’s stance reflects a more cautious and realistic assessment of AI’s current trajectory.

The Hype Around AGI and Quantum Computing

The pursuit of AGI has driven significant investment in AI research, particularly in language models and text data processing. However, LeCun points out that text is a “very poor source of information” for training AI systems to understand basic concepts about the world. He suggests that achieving even “cat-level” or “dog-level” AI is more likely in the near term than human-level AI. This perspective aligns with the broader consensus in the AI community that AGI remains a distant goal.

Multimodal AI: The Next Frontier

Meta’s research into multimodal AI systems, which combine text, audio, image, and video data, represents a significant step forward in AI development. These systems could potentially uncover hidden correlations between different types of data, leading to more advanced AI capabilities. For instance, Meta’s Project Aria augmented reality glasses, which blend digital graphics with the real world, demonstrate the potential of AI to enhance human experiences, such as teaching tennis techniques.

The Role of Hardware in AI’s Future

Nvidia’s graphics processing units (GPUs) have been instrumental in training large language models like Meta’s Llama AI software. As AI research progresses, the demand for more sophisticated hardware will likely increase. LeCun anticipates the emergence of new chips specifically designed for deep learning, moving beyond traditional GPUs. This shift could open up new possibilities in AI hardware development, potentially leading to more efficient and powerful AI systems.

Quantum Computing: Fascinating but Uncertain

LeCun also expressed doubts about the practical relevance of quantum computing, a field that has seen significant investment from tech giants like Microsoft, IBM, and Google. While quantum computing holds promise for certain applications, such as drug discovery, LeCun believes that many problems can be more efficiently solved with classical computers. This skepticism is shared by Meta senior fellow Mike Schroepfer, who views quantum technology as having a long time horizon before becoming practically useful.

A Balanced View on Technological Progress

LeCun’s views offer a balanced perspective on the future of AI and quantum computing, tempering the excitement with a realistic assessment of current capabilities and challenges. As the tech industry continues to explore these fields, it’s essential to maintain a critical eye on the practical implications and timelines of these technologies. The journey towards more advanced AI and the realization of quantum computing’s potential will likely be a long and complex one, requiring sustained effort and innovation.

In conclusion, while the future of AI and quantum computing is undoubtedly exciting, it’s important to approach these fields with a realistic understanding of their current state and potential. As LeCun’s insights suggest, the path to AGI and practical quantum computing is longer and more nuanced than some of the more optimistic predictions imply. The tech industry must continue to push the boundaries of what’s possible while remaining grounded in the realities of technological development.

Holography’s New Frontier: Deep Learning Transforms 2D Images into 3D Holograms

In the realm of visual technology, the quest for more immersive and realistic experiences never ceases. Holography, the science of creating three-dimensional images, has long been a subject of fascination and research. Now, a groundbreaking study led by Professor Tomoyoshi Shimobaba of the Graduate School of Engineering at Chiba University has introduced a novel deep-learning method that simplifies the creation of holograms. This innovation allows 3D images to be generated directly from 2D photos captured with standard cameras, marking a significant advancement in holographic technology.

Traditional holography involves capturing an object’s three-dimensional data and its interactions with light. This process demands high computational power and specialized cameras for capturing 3D images. This complexity has restricted the widespread adoption of holograms, despite their potential applications in various sectors like medical imaging, manufacturing, and virtual reality.

Deep learning has been making waves in the technology sector, and its application in holography is no exception. Previous methods have employed deep learning to create holograms directly from 3D data captured using RGB-D cameras, which capture both color and depth information of an object. This approach has circumvented many computational challenges associated with traditional holography.

The team from Chiba University proposes a different approach based on deep learning that further streamlines hologram generation. Their method employs a sequence of three deep neural networks to transform a regular 2D color image into data that can be used to display a 3D scene or object as a hologram. The first neural network predicts the associated depth map from the color image, providing information about the 3D structure of the image. The second and third neural networks are responsible for generating and refining the hologram, respectively.
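The data flow through the three networks can be illustrated with a minimal, hypothetical Python sketch. The three stages below are stubbed with simple deterministic functions (luminance as a depth proxy, a depth-dependent phase term, a small smoothing pass) purely to show how the outputs chain together; the actual method uses trained deep neural networks for each stage, and none of these function names or formulas come from the paper.

```python
import cmath

def predict_depth(rgb_image):
    """Stage 1 stand-in: estimate a per-pixel depth map from color.

    Here we simply use luminance as a proxy for depth; the first
    network in the paper learns this mapping from data.
    """
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def generate_hologram(rgb_image, depth_map):
    """Stage 2 stand-in: produce an initial complex-valued hologram.

    We encode depth as a phase on the luminance amplitude; the real
    second network learns the (image, depth) -> hologram mapping.
    """
    hologram = []
    for row_rgb, row_depth in zip(rgb_image, depth_map):
        hologram.append([
            (0.299 * r + 0.587 * g + 0.114 * b) * cmath.exp(1j * d)
            for (r, g, b), d in zip(row_rgb, row_depth)
        ])
    return hologram

def refine_hologram(hologram):
    """Stage 3 stand-in: refine the hologram (here: horizontal averaging)."""
    refined = []
    for row in hologram:
        new_row = []
        for i, v in enumerate(row):
            left = row[i - 1] if i > 0 else v
            right = row[i + 1] if i < len(row) - 1 else v
            new_row.append((left + v + right) / 3)
        refined.append(new_row)
    return refined

def image_to_hologram(rgb_image):
    depth = predict_depth(rgb_image)             # network 1: depth prediction
    hologram = generate_hologram(rgb_image, depth)  # network 2: hologram generation
    return refine_hologram(hologram)             # network 3: hologram refinement

# A tiny 2x2 "photo": each pixel is (R, G, B) in [0, 1].
photo = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
         [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]
hologram = image_to_hologram(photo)
print(len(hologram), len(hologram[0]))  # 2 2
```

The key point the sketch captures is the pipeline structure: only a standard 2D color image enters the system, and each stage's output feeds the next, with no depth camera needed at inference time.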

One of the most striking aspects of this new method is its speed. The researchers found that their approach outperforms current high-end graphics processing units in speed. Moreover, the method is cost-effective as it doesn’t require expensive equipment like RGB-D cameras after the training phase.

The implications of this research are far-reaching. In the automotive industry, for instance, this technology could revolutionize in-vehicle holographic systems, presenting necessary information to passengers in 3D. The U.S. Department of Transportation has been exploring the potential of such advanced display technologies for enhancing road safety. The technology could also find applications in high-fidelity 3D displays, heads-up displays, and head-mounted displays, further advancing the spread of holographic technology.

The introduction of deep learning into the field of holography has the potential to solve many of the challenges that have hindered its widespread adoption. By simplifying the process and making it more cost-effective, this new method could pave the way for holography to become a more integral part of our daily lives, from healthcare to transportation and beyond.

The research, titled “Multi-depth hologram generation from two-dimensional images by deep learning,” was recently published in the journal Optics and Lasers in Engineering.
