Scientists Fold the Smallest Microchips Ever from Graphene


New developments by physicists at the University of Sussex could lead to faster electronic gadgets. The physicists have created tiny microchip-like structures using ‘nano-origami,’ and they foresee phones and computers operating thousands of times faster.

The researchers worked with 2D materials, including graphene. They used structural defects within the materials to build the microchips.

These defects alter both the nano-mechanical and the electronic properties of the materials.

The researchers pinpointed the effects of defects like grain boundaries, collapsed wrinkles, and folded wrinkles using Raman mapping and atomic force microscopy.

When certain distortions are folded into graphene, the material behaves as a transistor, the basic building block of electronics. A graphene strip folded in this way can therefore act as a microchip.

The graphene strip in question is around 100 times smaller than a conventional microchip.

Lead researcher Dr. Manoj Tripathi explains the mechanism: “Instead of having to add foreign materials into a device, we’ve shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component like a transistor or a logic gate.”

The work is motivated by Moore’s Law, the observation that the number of transistors in an integrated circuit roughly doubles every two years.
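As a rough illustration (a worked example of the doubling rule, not something from the research itself), the law can be written as N(t) = N0 × 2^(t/2), with t in years:

```python
def transistor_count(n0: float, years: float) -> float:
    """Moore's Law as a doubling rule: the count doubles every two years."""
    return n0 * 2 ** (years / 2)

# Example: a chip with 1 billion transistors, projected ten years out.
print(f"{transistor_count(1e9, 10):.1e}")  # 3.2e+10, i.e. a 32x increase
```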

Academics and industry leaders have warned that Moore’s Law may be nearing its end, as silicon transistors approach the physical limits of miniaturization.

Graphene offers a possible alternative to silicon that could help preserve Moore’s Law. The researchers are the first to create a microchip using folded graphene.

Said Professor Alan Dalton: “We’re mechanically creating kinks in a layer of graphene. It’s a bit like nano-origami. Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future.”

“This kind of technology – ‘straintronics’ using nanomaterials as opposed to electronics – allows space for more chips inside any device. Everything we want to do with computers, to speed them up, can be done by crinkling graphene like this.”

The researchers are now hopeful about further developments in sustainable technology, since the process requires no additional materials and can be carried out at room temperature, saving energy.

The Dawn of AI-Integrated Computing: Microsoft’s New Copilot Key Revolutionizes PC Interaction


In a groundbreaking move, Microsoft is set to transform personal computing by introducing an AI-specific key on keyboards, marking a significant leap in the integration of artificial intelligence in everyday technology. This development, starting with new computers running Windows 11, heralds a new era where generative AI technology becomes more accessible and intertwined with our daily digital interactions.

The Emergence of the Copilot Key

The new feature, known as the “Copilot key,” is designed to launch Microsoft’s AI chatbot, a direct product of its collaboration with OpenAI, the creators of ChatGPT. This initiative is not just a technological advancement but a strategic move by Microsoft to leverage its partnership with OpenAI, transforming its software into a gateway for generative AI applications (Voice of America).

Shifting Trends in AI Accessibility

While most people currently access the internet and AI applications via smartphones, Microsoft’s move is expected to intensify competition in the technology sector, especially around AI. However, integrating AI into such common devices raises several ethical and legal questions. Notably, The New York Times recently initiated legal action against both OpenAI and Microsoft, citing concerns over copyright infringement by AI tools like ChatGPT and Copilot (The New York Times).

A Historical Perspective on Keyboard Design

The introduction of the AI key is Microsoft’s most significant alteration to PC keyboards since the debut of the special Windows key in the 1990s. The AI key, adorned with the Copilot logo, will be conveniently located near the space bar, replacing either the right “CTRL” key or a menu key on various computer models.

The Broader Context of Special Keys

Microsoft’s initiative follows a historical trend of special keys on keyboards. Apple pioneered this concept in the 1980s with its “Command” key, and Google introduced a search button on its Chromebooks. Google even experimented with an AI-specific key on its now-discontinued Pixelbook. However, Microsoft’s dominant position in the personal computer market, with agreements with major manufacturers like Lenovo, Dell, and HP, gives it a significant advantage. Approximately 82% of all desktop computers, laptops, and workstations run Windows, compared to 9% for Apple’s operating system and just over 6% for Google’s (IDC).

Industry Adoption and Future Prospects

Dell Technologies has already announced the inclusion of the Copilot key in its latest XPS laptops, and other manufacturers are expected to follow suit. Microsoft’s own Surface devices will also feature this key, with several companies anticipated to showcase their new models at the CES show in Las Vegas.

Conclusion

The introduction of the Copilot key by Microsoft is more than just a hardware innovation; it represents a paradigm shift in how we interact with our computers. By embedding AI directly into the keyboard, Microsoft is not only enhancing user experience but also paving the way for more advanced and intuitive computing. As we embrace this new era of AI-integrated computing, it is crucial to address the ethical and legal implications to ensure that this technological evolution benefits all users responsibly.

The Future of AI and Quantum Computing: A Realistic Perspective


In the rapidly evolving landscape of artificial intelligence (AI) and quantum computing, the opinions of industry leaders can significantly influence the direction of technological advancements. Yann LeCun, Meta’s chief AI scientist, recently offered a grounded perspective on these technologies, providing a contrast to the often hyperbolic narratives surrounding AI’s future capabilities and the potential of quantum computing.

AI’s Journey to Sentience: A Long Road Ahead

LeCun, a pioneer in deep learning, expressed skepticism about the imminent arrival of artificial general intelligence (AGI) – AI with human-level intelligence. Speaking at the Viva Tech conference in Paris, he highlighted the limitations of current AI systems, which, despite their ability to process vast amounts of text, lack the common sense necessary for true sentience. This view contrasts with Nvidia CEO Jensen Huang’s assertion that AI will rival human intelligence in less than five years, as reported by CNBC. LeCun’s stance reflects a more cautious and realistic assessment of AI’s current trajectory.

The Hype Around AGI and Quantum Computing

The pursuit of AGI has driven significant investment in AI research, particularly in language models and text data processing. However, LeCun points out that text is a “very poor source of information” for training AI systems to understand basic concepts about the world. He suggests that achieving even “cat-level” or “dog-level” AI is more likely in the near term than human-level AI. This perspective aligns with the broader consensus in the AI community that AGI remains a distant goal.

Multimodal AI: The Next Frontier

Meta’s research into multimodal AI systems, which combine text, audio, image, and video data, represents a significant step forward in AI development. These systems could potentially uncover hidden correlations between different types of data, leading to more advanced AI capabilities. For instance, Meta’s Project Aria augmented reality glasses, which blend digital graphics with the real world, demonstrate the potential of AI to enhance human experiences, such as teaching tennis techniques.
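The article does not describe Meta’s model internals, but the general idea behind multimodal systems can be illustrated with a small sketch. The hypothetical PyTorch example below projects precomputed image and text features into a shared embedding space and aligns paired samples with a contrastive loss; every name and dimension in it is an illustrative assumption, not Meta’s actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of multimodal alignment (not Meta's architecture):
# each modality gets its own projection into a shared space, and paired
# samples are pulled together with a contrastive (InfoNCE-style) loss.
class JointEmbedder(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, shared_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)

    def forward(self, image_feats, text_feats):
        # L2-normalize so the dot product below is cosine similarity.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Similarity matrix between every image and every text in the batch;
    # the diagonal holds the true (paired) matches.
    logits = img @ txt.t() / temperature
    targets = torch.arange(len(img))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random "features" standing in for encoder outputs.
model = JointEmbedder()
img, txt = model(torch.randn(8, 2048), torch.randn(8, 768))
print(contrastive_loss(img, txt).item())
```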

The Role of Hardware in AI’s Future

Nvidia’s graphics processing units (GPUs) have been instrumental in training large language models like Meta’s Llama AI software. As AI research progresses, the demand for more sophisticated hardware will likely increase. LeCun anticipates the emergence of new chips specifically designed for deep learning, moving beyond traditional GPUs. This shift could open up new possibilities in AI hardware development, potentially leading to more efficient and powerful AI systems.

Quantum Computing: Fascinating but Uncertain

LeCun also expressed doubts about the practical relevance of quantum computing, a field that has seen significant investment from tech giants like Microsoft, IBM, and Google. While quantum computing holds promise for certain applications, such as drug discovery, LeCun believes that many problems can be more efficiently solved with classical computers. This skepticism is shared by Meta senior fellow Mike Schroepfer, who views quantum technology as having a long time horizon before becoming practically useful.

A Balanced View on Technological Progress

LeCun’s views offer a balanced perspective on the future of AI and quantum computing, tempering the excitement with a realistic assessment of current capabilities and challenges. As the tech industry continues to explore these fields, it’s essential to maintain a critical eye on the practical implications and timelines of these technologies. The journey towards more advanced AI and the realization of quantum computing’s potential will likely be a long and complex one, requiring sustained effort and innovation.

In conclusion, while the future of AI and quantum computing is undoubtedly exciting, it’s important to approach these fields with a realistic understanding of their current state and potential. As LeCun’s insights suggest, the path to AGI and practical quantum computing is longer and more nuanced than some of the more optimistic predictions imply. The tech industry must continue to push the boundaries of what’s possible while remaining grounded in the realities of technological development.

Holography’s New Frontier: Deep Learning Transforms 2D Images into 3D Holograms


In the realm of visual technology, the quest for more immersive and realistic experiences never ceases. Holography, the science of creating three-dimensional images, has long been a subject of fascination and research. Now, a groundbreaking study led by Professor Tomoyoshi Shimobaba of the Graduate School of Engineering at Chiba University has introduced a novel deep-learning method that simplifies the creation of holograms. This innovation allows 3D images to be generated directly from 2D photos captured with standard cameras, marking a significant advancement in holographic technology.

Traditional holography involves capturing an object’s three-dimensional data and its interactions with light. This process demands high computational power and specialized cameras for capturing 3D images. This complexity has restricted the widespread adoption of holograms, despite their potential applications in various sectors like medical imaging, manufacturing, and virtual reality.

Deep learning has been making waves in the technology sector, and its application in holography is no exception. Previous methods have employed deep learning to create holograms directly from 3D data captured using RGB-D cameras, which capture both color and depth information of an object. This approach has circumvented many computational challenges associated with traditional holography.

The team from Chiba University proposes a different approach based on deep learning that further streamlines hologram generation. Their method employs a sequence of three deep neural networks to transform a regular 2D color image into data that can be used to display a 3D scene or object as a hologram. The first neural network predicts the associated depth map from the color image, providing information about the 3D structure of the image. The second and third neural networks are responsible for generating and refining the hologram, respectively.
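The paper’s exact architectures are not given here, so the following is only a hypothetical PyTorch sketch of the three-stage pipeline as described: one network predicts a depth map from the color image, a second generates a hologram from the image and depth, and a third refines it. The layer sizes and module names are illustrative assumptions, with small placeholder convolutional networks standing in for the real models.

```python
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch):
    # Placeholder network; the paper's actual architectures are not
    # described in this article.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

class HologramPipeline(nn.Module):
    """Hypothetical three-stage pipeline matching the description:
    RGB image -> depth map -> hologram -> refined hologram."""
    def __init__(self):
        super().__init__()
        self.depth_net = small_cnn(3, 1)     # 1: predict depth from RGB
        self.hologram_net = small_cnn(4, 2)  # 2: RGB + depth -> complex field (re, im)
        self.refine_net = small_cnn(2, 2)    # 3: refine the generated hologram

    def forward(self, rgb):
        depth = self.depth_net(rgb)
        hologram = self.hologram_net(torch.cat([rgb, depth], dim=1))
        return self.refine_net(hologram)

# Toy usage: one 256x256 RGB photo in, a two-channel hologram field out.
out = HologramPipeline()(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 2, 256, 256])
```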

One of the most striking aspects of this new method is its speed. The researchers found that their approach outperforms current high-end graphics processing units in speed. Moreover, the method is cost-effective as it doesn’t require expensive equipment like RGB-D cameras after the training phase.

The implications of this research are far-reaching. In the automotive industry, for instance, this technology could revolutionize in-vehicle holographic systems, presenting necessary information to passengers in 3D. The U.S. Department of Transportation has been exploring the potential of such advanced display technologies for enhancing road safety. Additionally, the technology could find applications in high-fidelity 3D displays, as well as heads-up and head-mounted displays, further supporting the development of ubiquitous holographic technology.

The introduction of deep learning into the field of holography has the potential to solve many of the challenges that have hindered its widespread adoption. By simplifying the process and making it more cost-effective, this new method could pave the way for holography to become a more integral part of our daily lives, from healthcare to transportation and beyond.

The research, titled “Multi-depth hologram generation from two-dimensional images by deep learning,” was recently published in the journal Optics and Lasers in Engineering.
