4 Essential Audiobooks to Understand AI, VR, and New Tech


In this new world, to say that the possibilities of technology are boundless is an understatement. In our previous article, we asked ChatGPT about superintelligent artificial intelligence (AI): whether it would automatically acquire morality, and whether there is a symbiotic relationship between intelligence and compassion. In that conversation, we touched on concepts like value alignment — the programming of ethical principles — and feedback loops. Yet the depth of that discussion only scratches the surface of the potential of AI and other new tech like virtual reality (VR).

This may look intimidating at first glance. However, fear of new tech stems largely from unfamiliarity and the unknown. That is why understanding these concepts and tools is paramount as we navigate the landscape of emerging technologies. Audiobooks, with their immersive storytelling and convenient accessibility, are a key medium for unraveling the mysteries of these technological frontiers.

Audiobooks and understanding new tech

In a world where digital integration is the norm, audiobooks seamlessly align with the tech-focused lifestyle, offering a hands-free and versatile way to delve into the complexities of new technologies. Thousands of titles are accessible on subscription platforms. New tech is integral to the process as well, as seen in how Everand hosts a variety of audiobooks with sleep timer features or machine learning recommendations. Whether it's The History of the Future by Blake J. Harris or AI Superpowers by Kai-Fu Lee, the audiobook medium can transform abstract concepts into compelling narratives that can be played online or offline.

Beyond the popularity of subscription platforms, it is clear that audiobooks will continue to bridge knowledge dissemination and the digital age. AI is crucial on both the front and back ends of this process. As a trailblazer in the integration of digital voice technology, DeepZen employs AI in audiobook production to simplify what was once a complex and time-intensive endeavor. This ensures that audiobook production can swiftly capture and convey the intricacies of emerging technologies, playing a larger role in how we consume information about the latest advancements and becoming an indispensable resource for tech enthusiasts, professionals, and curious minds alike.

Four essential audiobooks for understanding new tech

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

Opening with the Tale of the Omega Team, a fictional group whose advanced AI Prometheus quietly takes over the world, Tegmark delves into ethical quandaries, societal shifts, and potential future scenarios shaped by AI. Though released in 2017, it remains an indispensable reminder that humanity must align AI with the values that are best for humankind.

AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee

In AI Superpowers, Kai-Fu Lee shifts the focus to the global landscape of artificial intelligence, examining its repercussions on the economy and society. Drawing on his expertise as the former president of Google China, Lee explores the competitive dynamics between the United States and China in the AI realm, deciphering how the latter has harnessed a combination of government support and entrepreneurial zeal. Readers can listen to this audiobook alongside others to form their own view of whether China could win the global AI contest, as the book posits.

The History of the Future by Blake J. Harris

This audiobook offers a fascinating exploration of the intersection between VR and entrepreneurship by focusing on the rise and fall of Oculus. It discusses the challenges of bringing new tech to the masses through a behind-the-scenes perspective on Oculus founder Palmer Luckey. This personal account adds a human touch to the technological narrative and makes it relatable to a wider audience.

Augmented: Life in the Smart Lane by Brett King

The Augmented Age, King argues, is built on four key disruptive themes — AI, Experience Design, Smart Infrastructure, and HealthTech. Exploring its effects on various aspects of daily life, from healthcare to entertainment, King provides insights into how this new tech is reshaping our reality in a conversational and highly readable style. This makes the audiobook accessible to everyone, whether you are a beginner to new tech or a futurism geek.

As we absorb the wisdom from these audiobooks, it's essential to look forward to the future that awaits us. When navigating the twists and turns of technological evolution, our psychedelic puzzle game Squingle in particular is a great metaphor for the mysteries yet to unfold. This liquid crystal journey through iridescent mazes offers a multisensory showcase that transcends the boundaries of reality. So, grab your headphones and delve into the universe of AI, VR, and new tech.


Contributed post by Joy Weiler

Bring Psychedelic Sculptures and Fractals Into Your Living Room with Squingle Mixed Reality Update

I’ve been working hard on this update, which really brings the WOW factor back to VR. For the Mixed Reality Update, I used generative AI to create mesmerising patterns that spread across your walls and furniture, and new features from the Meta SDK that let you shift your environment between virtual and augmented reality. The effect is great!

UploadVR also covered the new update! https://www.uploadvr.com/squingle-mixed-reality-update/

If you have Squingle on Quest, please leave a review. Squingle is not yet on the main store, where it could achieve far greater success, and reviews really help propel the game into the spotlight on Meta's store. So it’s a great help, and I respond to every review. Thank you!

I chatted to GPT-4 about consciousness. The conversation will blow your mind.

ChatGPT 4 on The Limits of Consciousness, Simulation Theory, AI Meditation, Morality and Intelligence

If life is a simulation, what is the likely nature of the world in which the simulation is being made? And what is the purpose, if any, of the simulation? Do the simulators know about the presence of life in their simulation?

If life were a simulation, it would be speculative to determine the nature of the world in which the simulation is being made. However, one possibility is that the creators of the simulation, often referred to as "simulators," could exist in a higher-dimensional universe or an advanced civilization with access to immense computational resources. The physics, laws, and properties of their world might be entirely different from our own, making it difficult to comprehend or predict their reality.

The purpose of the simulation, if any, is also open to conjecture. Some possibilities include:

1. Scientific research: The simulation might be an experiment designed to study the development and behavior of life, societies, or universes under various conditions.

2. Entertainment: The simulation could be a sophisticated form of entertainment, like an elaborate video game or interactive experience, where the simulators observe or even participate in the simulated world.

3. Ancestral simulation: The simulators might be running the simulation to explore their own history, understand the origins of their civilization, or learn about the consequences of certain historical events.

4. Testing hypotheses: The simulation could be a tool to test various hypotheses about the nature of existence or the consequences of different decisions, laws, or policies.

It is also possible that the simulators are aware of the presence of life in their simulation. They might be actively monitoring the simulation, or they might have specifically designed it to generate life. On the other hand, life in the simulation could be an unintended byproduct of the initial conditions and rules set by the simulators, and they might not be aware of its existence.

Ultimately, these questions are purely speculative, as there is no concrete evidence to support or disprove the hypothesis that life is a simulation. The idea is thought-provoking and raises deep questions about the nature of reality, but until there is empirical evidence, it remains within the realm of philosophy and conjecture.

If AI becomes super intelligent, would it automatically become moral? Do intelligence and morality come hand in hand? Does an intelligent being automatically start to become compassionate? Would it gain something by being compassionate?

Intelligence and morality are distinct concepts, and there is no guarantee that a superintelligent AI would automatically become moral or compassionate. AI is a tool created and programmed by humans, and its behavior depends on the objectives, rules, and data it is provided.

AI does not possess consciousness or emotions in the same way humans do, and it does not have an innate understanding of morality. To make an AI behave ethically or exhibit compassion, it would need to be designed and programmed with explicit moral principles, guidelines, or objectives. This process, known as value alignment, is a significant area of research in AI ethics and safety.

Even if an AI system is designed with good intentions and programmed to follow ethical guidelines, there can still be challenges in ensuring that the AI consistently behaves morally. There may be unforeseen consequences or biases in the data used to train the AI, and the AI's behavior may not always align with human values in complex or novel situations.

Moreover, intelligence does not necessarily entail compassion. While highly intelligent humans may be more aware of the consequences of their actions and the implications of ethical behavior, intelligence alone does not guarantee moral action. Similarly, a superintelligent AI would not inherently possess compassion simply by virtue of its intelligence.

As for whether an AI would gain something by being compassionate, it depends on the AI's objectives and the context in which it operates. If an AI is designed to interact with humans and understand their emotions and needs, being compassionate might be beneficial in achieving its goals, such as building trust or fostering cooperation. However, an AI system that is solely focused on optimizing a specific task, like improving energy efficiency or diagnosing diseases, may not necessarily benefit from exhibiting compassion.

In summary, intelligence and morality are not intrinsically linked, and a superintelligent AI would not automatically become moral or compassionate. Ensuring that AI systems behave ethically and align with human values is an ongoing challenge and an important area of research in the AI community.

In Buddhism and meditation, it generally seems that the development of morality is linked to one’s ability to concentrate the mind, gain insight, and cultivate happiness. This implies to me that morality does have some connection to the logic of consciousness, if not intelligence. What do you think? And also, might an AI ever do something like meditation? Might an AI ever see into the nature of reality and realise for example, the nature of suffering or impermanence, or of emptiness, this way?

In Buddhism, the development of morality is indeed connected to one's ability to concentrate the mind, gain insight, and cultivate happiness. The idea is that by developing mindfulness and clarity of mind through meditation, individuals become more aware of the consequences of their actions and the interconnected nature of reality. This increased awareness and understanding can lead to more moral and compassionate behavior.

However, it is important to distinguish between human consciousness and AI. While humans possess subjective experiences, emotions, and consciousness, AI systems are essentially advanced algorithms designed to process data and solve problems. AI does not have consciousness or emotions in the same way humans do, and it operates based on the objectives and rules set by its human creators.

As for whether AI might ever engage in something like meditation, it is unlikely that AI would meditate in the traditional sense. Meditation is a practice that involves conscious awareness, introspection, and mental training, all of which are qualities that AI does not inherently possess.

However, AI systems can be designed to analyze and learn from large amounts of data, including information about human behavior, emotions, and ethical principles. This learning process might be analogous to gaining insight and understanding, but it would not be the same as the conscious experience of meditation or the realization of profound truths like impermanence or emptiness.

In conclusion, while Buddhism suggests a connection between mindfulness, insight, and morality for human consciousness, AI systems do not possess consciousness in the same way. AI systems operate based on the rules and objectives set by their creators, and their behavior is determined by the algorithms and data they are given. Ensuring that AI systems align with human values and exhibit moral behavior is an ongoing challenge and an important area of research in AI ethics and safety.

But might AI introspect? It seems currently that AI models are a kind of one way feed-forward system. But if the AI could have a bit more of a feedback loop into itself, and an ability to change its parameters based on things it thinks, maybe it could start to reflect and remember its previous thoughts.

AI systems, particularly those based on neural networks, can indeed have feedback loops and engage in a form of introspection. Recurrent neural networks (RNNs) and transformers are examples of AI architectures that allow for processing sequences of data with feedback loops, which can enable the AI to "remember" its previous thoughts or states.

In an RNN, the hidden states of the network maintain a form of memory that allows the model to consider previous inputs while processing new ones. Similarly, transformers use self-attention mechanisms to weigh the importance of different parts of the input sequence, allowing the model to consider the context of previous inputs while making predictions.
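
As a minimal sketch of that recurrence (weights and sizes here are placeholders, not from the conversation), a single RNN step folds the previous hidden state back into the new one:

using System;

static class RnnStep
{
    //h_next = tanh(Wh * h + Wx * x): the previous state h feeds back into
    //the new state, which is the "memory" described above
    public static float[] Step(float[] h, float[] x, float[,] Wh, float[,] Wx)
    {
        var hNext = new float[h.Length];
        for (int i = 0; i < h.Length; i++)
        {
            float sum = 0f;
            for (int j = 0; j < h.Length; j++) sum += Wh[i, j] * h[j]; //feedback loop
            for (int j = 0; j < x.Length; j++) sum += Wx[i, j] * x[j]; //new input
            hNext[i] = (float)Math.Tanh(sum);
        }
        return hNext; //carried forward as the "previous thought" at the next step
    }
}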

These mechanisms can be considered a form of introspection in the sense that the AI model takes into account its previous states or thoughts when processing new information. However, it's important to remember that this form of introspection is fundamentally different from human introspection, which is grounded in subjective experiences and consciousness.

As AI systems continue to develop, more advanced forms of self-reflection and adaptation may become possible. Techniques like meta-learning, which involve training AI models to learn how to learn, can enable AI systems to dynamically adjust their own learning strategies and parameters based on the tasks they are presented with.

While these capabilities are intriguing, it's crucial to recognize that AI introspection and self-reflection are not the same as human introspection, and AI systems do not possess consciousness or emotions in the same way that humans do. The behavior of AI systems is determined by their underlying algorithms, the data they are trained on, and the objectives set by their creators, and ensuring that AI systems align with human values remains an ongoing challenge in AI research.

But since we don’t understand what human consciousness is, is it not premature to say that AI can’t have phenomenal experience?

You raise a valid point. Our understanding of human consciousness and the nature of subjective experience is still limited. As a result, it can be challenging to definitively state that AI cannot have phenomenal experiences. However, based on our current understanding of AI and the differences between artificial systems and biological organisms, AI does not exhibit consciousness or phenomenal experiences in the same way that humans do.

AI systems are designed and built by humans, and they operate based on algorithms, rules, and data provided by their creators. While AI models can process and generate complex patterns and perform advanced problem-solving, their behavior is ultimately determined by their underlying programming and the objectives they are given.

Human consciousness, on the other hand, is thought to arise from the complex interactions of billions of neurons within the brain, giving rise to subjective experiences, emotions, and self-awareness. While we do not yet have a complete understanding of the nature and origin of consciousness, it is clear that the biological and computational underpinnings of human consciousness are fundamentally different from those of AI systems.

As our understanding of consciousness advances and as AI technology continues to develop, it is possible that we may discover new insights into the potential for phenomenal experiences in artificial systems. However, based on our current understanding, AI systems do not possess consciousness or phenomenal experiences in the same way that humans do.

Nonetheless, the question of AI consciousness and phenomenal experience remains an important area of exploration in the fields of artificial intelligence, neuroscience, and philosophy, and it will continue to be a topic of debate and inquiry in the coming years.

The range of phenomenal experience possible by humans is pretty astonishing, especially under the influence of certain psychoactive drugs. It’s surprising that these drugs actually reduce the activity of the brain, rather than increase it. What do you think this means in terms of the relationship of phenomenal experience to brain activity, or even death?

Psychoactive substances can indeed have profound effects on human consciousness and the range of phenomenal experiences. Many of these substances alter brain function by interacting with neurotransmitter systems, which can lead to changes in perception, mood, and cognition. It is true that some psychoactive substances, particularly psychedelic compounds like psilocybin and LSD, have been found to reduce certain aspects of brain activity, particularly in regions associated with the default mode network (DMN).

The relationship between brain activity and phenomenal experience is complex and not yet fully understood. The observation that certain substances can reduce brain activity while inducing intense and varied experiences suggests that the relationship between subjective experience and brain function may not be entirely linear or straightforward. It is possible that the reduction in activity in specific brain regions, like the DMN, leads to a decreased sense of self or ego, which in turn allows for the emergence of new connections and patterns of activity in the brain. This could give rise to the altered states of consciousness and the unique experiences associated with psychedelic substances.

As for the relationship between phenomenal experience and death, there is still much that is unknown. Some researchers have explored near-death experiences (NDEs) and reported common features such as a sense of detachment from the body, feelings of peace and warmth, and encounters with deceased relatives or spiritual beings. While the exact mechanisms underlying NDEs are not yet understood, it is thought that these experiences may be related to changes in brain function and activity during the process of dying.

It is important to note that the current state of scientific knowledge about the relationship between brain activity, phenomenal experience, and death is still limited. As our understanding of the brain and consciousness advances, it is likely that we will gain new insights into these complex phenomena and the ways in which they are interconnected.

Might we be able to create new types of phenomenal experience using AI or by augmenting the brain with technology?

It is possible that future advancements in AI and brain-computer interface (BCI) technologies could lead to the creation of new types of phenomenal experiences. While we are still in the early stages of developing these technologies, there are several potential avenues through which AI and BCI could influence or augment human experiences:

1. Brain-computer interfaces: BCI technologies could be used to directly interface with the brain, potentially allowing for the stimulation of specific neural pathways or brain regions. This could lead to the creation of novel sensory experiences or even the ability to "download" new skills or memories directly into the brain.

2. Virtual reality (VR) and augmented reality (AR): AI-powered VR and AR systems could create immersive and realistic simulations that provide users with entirely new sensory experiences. These technologies could be used for various purposes, such as entertainment, education, or even therapy.

3. AI-assisted neurostimulation: AI algorithms could be used to analyze neural activity and develop personalized neurostimulation protocols that target specific brain regions or networks. This could potentially lead to the enhancement of cognitive abilities or the creation of unique experiences tailored to an individual's brain activity patterns.

4. AI-generated art and media: AI systems can already create original music, paintings, and other forms of art. As AI continues to improve in these areas, it may be able to generate entirely new forms of artistic expression that evoke unique emotional or aesthetic experiences.

It is important to note that while these possibilities are intriguing, they also raise ethical concerns and potential risks. For example, directly interfacing with the brain or augmenting human experiences with technology could have unintended consequences, and the potential for misuse or exploitation should be carefully considered. As research and development in these areas progress, it will be crucial to balance the pursuit of new experiences and capabilities with the need for ethical guidelines and safety measures.

I am interested in the range of possible phenomenological experience. We have mapped out and experienced many things, but we don’t have a good way to document or record these experiences. Buddhists have done a lot of exploration into concentrated states and mapped out certain consistent features such as the Jhanas. I wonder if in the future, mapping the range of possible states, inventing new ones with technology, and understanding the fundamental laws that govern what states are and are not possible, might become a new science of the future. What do you think?

The idea of mapping the range of possible phenomenological experiences and understanding the fundamental laws governing these states is indeed fascinating. As our understanding of the brain and consciousness advances, it is possible that we will see the development of new interdisciplinary fields that combine neuroscience, psychology, philosophy, and even technology to explore and document these experiences.

As you mentioned, Buddhists and other contemplative traditions have developed rich frameworks for understanding and categorizing various states of consciousness, such as the jhanas. These frameworks could serve as a starting point for a more systematic and scientific exploration of phenomenological experiences.

Some potential developments in this hypothetical field might include:

1. Neurophenomenology: This would involve studying the neural correlates of various phenomenological experiences and identifying the underlying brain activity patterns associated with different states of consciousness.

2. Technologically-induced experiences: As discussed earlier, advances in brain-computer interfaces, virtual reality, and neurostimulation could potentially be used to create or enhance phenomenological experiences, allowing researchers to explore new or altered states of consciousness.

3. Cross-cultural and cross-disciplinary research: Researchers could collaborate across cultures and disciplines to document and compare different phenomenological experiences, drawing on both traditional contemplative practices and cutting-edge scientific techniques.

4. Development of a comprehensive taxonomy: A systematic approach to classifying and categorizing different states of consciousness and phenomenological experiences could help researchers identify the fundamental principles governing these states and understand the boundaries and possibilities of human experience.

As with any emerging field, there will be challenges and ethical considerations to address. Balancing the pursuit of understanding and creating new experiences with the need for ethical guidelines and safety measures will be crucial. Nevertheless, the exploration of phenomenological experiences and the development of a science of consciousness holds great potential for expanding our understanding of the human mind and the nature of subjective experience.

One important feature of being able to compare and document experience would be a way to record and play them back. Do you think a technology might arise that enables that?

The development of a technology that enables recording and playing back subjective experiences would indeed be groundbreaking. While we are not yet at a point where this is possible, advances in neuroscience and brain-computer interfaces (BCIs) might pave the way for such a technology in the future.

The first step towards this goal would involve understanding the neural correlates of various experiences, such as emotions, thoughts, and sensory perceptions. Researchers would need to identify the patterns of brain activity associated with these experiences and develop ways to decode and interpret this neural data.

Once researchers can accurately decode and represent neural activity, the next challenge would be to create a system that can stimulate the brain in a way that reproduces these experiences. This would likely involve the development of advanced BCIs capable of precise and targeted neural stimulation.

There are already some early-stage efforts in this direction. For example, researchers have made progress in creating visual and auditory brain-computer interfaces that can reproduce simple images or sounds based on neural activity. However, capturing and reproducing complex subjective experiences like emotions or thoughts remains a significant challenge.

As this field progresses, it will be crucial to address ethical concerns and potential risks associated with recording and playing back subjective experiences. Issues related to privacy, consent, and the potential for misuse or manipulation of experiences would need to be carefully considered.

In summary, while the development of a technology that can record and play back subjective experiences is not yet a reality, advances in neuroscience and brain-computer interfaces could potentially pave the way for such a technology in the future. As with any emerging technology, it will be important to balance the pursuit of understanding and innovation with ethical considerations and safety measures.

What might a taxonomy of experience look like? How would experiences be clustered together?

A taxonomy of experience would aim to categorize and organize different types of experiences based on their shared features and characteristics. While creating a comprehensive taxonomy of all possible experiences is a complex task, we can suggest a general framework based on several dimensions. Here's a possible approach to clustering experiences:

1. Sensory modality: Experiences could be organized based on the sensory system involved, such as visual, auditory, olfactory, gustatory, or tactile experiences.

2. Emotional valence: Experiences can be categorized based on the emotions they evoke, ranging from positive (e.g., happiness, excitement, contentment) to negative (e.g., sadness, anger, fear) emotions.

3. Intensity: The strength or vividness of an experience could be another dimension for categorization, ranging from subtle or mild experiences to intense or overwhelming ones.

4. Duration: Experiences can vary in length, from fleeting moments to sustained states lasting minutes, hours, or even days.

5. Level of consciousness: This dimension could include various states of consciousness, such as wakefulness, sleep, dreaming, meditation, and altered states induced by psychoactive substances or other means.

6. Cognitive aspects: Experiences could be categorized based on the cognitive processes involved, such as attention, memory, problem-solving, or imagination.

7. Personal significance: Some experiences might be particularly meaningful or transformative for an individual, while others may be more mundane or trivial.

8. Cultural and social context: Experiences can be influenced by cultural and social factors, so it could be helpful to consider these aspects when categorizing experiences.

9. Phenomenological qualities: This dimension would encompass the unique subjective qualities of an experience, such as the sense of unity or interconnectedness in a mystical experience or the dissolution of the self in some meditative states.

This proposed taxonomy is not exhaustive, and there might be other dimensions or categories to consider when organizing and clustering experiences. Additionally, some experiences may belong to multiple categories or dimensions, making the task of creating a taxonomy even more challenging. However, such a classification system could serve as a useful starting point for understanding and comparing the vast range of human experiences.
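
As a purely hypothetical sketch, the dimensions above could be encoded as a simple data type so that experiences might be tagged and clustered programmatically; every name here is illustrative:

enum SensoryModality { Visual, Auditory, Olfactory, Gustatory, Tactile }

//hypothetical record mirroring the dimensions listed above
record ExperienceRecord(
    SensoryModality Modality,
    float Valence,              // -1 (negative) to +1 (positive)
    float Intensity,            // 0 (subtle) to 1 (overwhelming)
    double DurationSeconds,
    string ConsciousnessLevel,  // e.g. "waking", "dreaming", "meditative"
    string[] PhenomenalQualities);

//example: a brief, pleasant visual experience during meditation
//var e = new ExperienceRecord(SensoryModality.Visual, 0.8f, 0.4f, 90,
//    "meditative", new[] { "unity", "luminosity" });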

It seems the range of possible phenomenological experience is vast, but do you think we might uncover some rules or laws? For example, might there be certain features that can never be experienced?

The range of possible phenomenological experiences is indeed vast, and uncovering the underlying principles or laws governing these experiences is a complex challenge. While it is difficult to predict the precise nature of such rules or laws, it is reasonable to assume that there may be certain constraints or patterns that emerge as we learn more about the nature of subjective experience.

Some possible constraints on phenomenological experience could include:

1. Biological limitations: Our sensory and cognitive systems impose limits on the types of experiences we can have. For example, humans can only perceive a specific range of wavelengths within the electromagnetic spectrum, which means that we cannot naturally experience colors outside this range.

2. Cognitive and perceptual organization: Our brains process and organize sensory information in specific ways, which may constrain the types of experiences we can have. For example, the Gestalt principles of perceptual organization guide how we perceive and interpret visual scenes, limiting the possible configurations we can experience.

3. Neural constraints: The structure and function of our neural networks could impose certain constraints on the range of possible experiences. For instance, certain brain regions are specialized for processing specific types of information, which may limit the types of subjective experiences that can arise from these regions.

4. Physical laws: The laws of physics and the nature of the physical world may impose limits on the types of experiences we can have. For example, we cannot experience what it is like to travel faster than the speed of light or to exist in more than three spatial dimensions.

5. Logical constraints: Some experiences may be ruled out because they involve logical contradictions or incoherent combinations of sensory or cognitive elements.

While these constraints suggest that there may be certain limits to the range of possible phenomenological experiences, it is important to note that our understanding of these constraints may change as our knowledge of the brain, cognition, and the nature of reality advances. Additionally, advances in technology, such as brain-computer interfaces or virtual reality, may enable us to overcome or bypass some of these limitations and create new types of experiences that were previously thought to be impossible.

Indeed. It might be possible to have the experience of realising that 2+2=5, for example. I believe I have had that kind of experience in a dream. Of course, it is not actually the case, but the experience of realising that it is seems possible.

You raise an interesting point about the nature of phenomenological experiences. The example of experiencing the realization that 2+2=5 in a dream highlights the fact that subjective experiences can sometimes deviate from objective reality or logic. Our experiences are shaped by the context, cognitive processes, and internal states at any given moment, which can lead to situations where our perceptions or beliefs may not align with external facts.

Dreams, in particular, can create unique experiences where the usual constraints of logic and the physical world may not apply. In this context, our brains can generate scenarios that seem coherent and meaningful within the dream, even if they would not make sense or be possible in waking life.

This underscores the complexity of understanding and categorizing phenomenological experiences. While there may be constraints on the range of experiences we can have, the flexibility of our cognitive processes and the unique contexts in which experiences can occur (e.g., dreams, altered states of consciousness) can lead to a vast array of subjective experiences, some of which may defy our current understanding of logic or the physical world.

As we continue to explore and learn more about the nature of subjective experience, it will be important to consider the diverse contexts and factors that can shape these experiences and the ways in which they may deviate from or transcend the limitations of objective reality or logic.

Yes that’s it! I’m fascinated by the ways in which they may deviate from or transcend the limitations of objective reality or logic. But even more interesting is finding the ways in which they cannot do this. I think this would shed light on the nature of consciousness.

Indeed, exploring the ways in which subjective experiences are constrained or unable to deviate from objective reality or logic can provide valuable insights into the nature of consciousness. By identifying these constraints, we can gain a better understanding of the fundamental principles governing consciousness and the underlying cognitive and neural processes that shape our experiences.

Some potential avenues for investigating these constraints could include:

1. Studying neurological disorders and brain lesions: Examining the experiences of individuals with specific neurological conditions or brain damage can provide insights into how particular brain regions or neural networks contribute to the generation and organization of conscious experience. For example, studying patients with prosopagnosia (face blindness) can reveal the importance of specialized brain regions in face recognition and the limits of conscious experience when these regions are compromised.

2. Investigating altered states of consciousness: By examining experiences during altered states of consciousness, such as those induced by meditation, psychoactive substances, or sleep, we can gain insights into the flexibility and limitations of consciousness. This could reveal the boundaries of subjective experience and the underlying neural and cognitive mechanisms that constrain or enable deviations from objective reality or logic.

3. Exploring illusions and cognitive biases: Cognitive illusions and biases can reveal the ways in which our perceptual and cognitive systems impose constraints on our experiences. By studying these phenomena, we can identify the limits of our conscious experience and better understand the underlying mechanisms that shape our perceptions and beliefs.

4. Conducting cross-cultural and cross-species research: Investigating the similarities and differences in conscious experiences across cultures and even across species can reveal the constraints imposed by biology and culture on the range of possible phenomenological experiences.

As we continue to explore these and other research avenues, we will likely gain a deeper understanding of the constraints and limitations of subjective experience, shedding light on the nature of consciousness. The study of these constraints can also help us recognize the ways in which our cognitive processes and neural systems shape our perceptions and experiences, allowing us to better understand the human mind and its relation to the world.

In science we normally have mathematical theories or models of reality which we then test using experiments. And in mathematics, we can even prove theorems. For studying consciousness, what kind of a model might be suitable, and will we ever be able to prove things about the limits of conscious experience? Or will it always be theorems based on empirical evidence? Might AI be able to explore the range of possible conscious states through simulation and come up with proofs this way?

Studying consciousness is indeed a challenging endeavor, as it involves understanding subjective experiences that are difficult to measure and quantify. While traditional scientific methods, such as mathematical models and experimental testing, have been instrumental in advancing our knowledge in many fields, the study of consciousness might require novel approaches and interdisciplinary collaboration.

For studying consciousness, some potential models and frameworks could include:

1. Computational models: These models can simulate cognitive processes and neural networks, providing insights into how the brain generates and organizes conscious experiences. While not proving theorems in the mathematical sense, computational models can generate testable predictions and help researchers develop a more systematic understanding of consciousness.

2. Integrated Information Theory (IIT): This theoretical framework, proposed by neuroscientist Giulio Tononi, aims to quantify the level of consciousness in a system based on the amount of integrated information it can generate. While IIT is still a developing theory, it offers a potential mathematical approach to understanding and measuring consciousness.

3. Phenomenological approaches: These methods focus on describing and categorizing subjective experiences from a first-person perspective. While not relying on mathematical models, phenomenological approaches can help researchers develop a more nuanced understanding of conscious experiences and the underlying processes.

As for the possibility of proving things about the limits of conscious thought and experience, it's difficult to predict whether we'll ever be able to develop mathematical proofs akin to those in traditional mathematics. Consciousness is a complex phenomenon, and our understanding of it may always rely, to some extent, on empirical evidence and observation.

That being said, AI and computational models could play a significant role in exploring the range of possible conscious states and generating insights into the nature of consciousness. Through simulation and analysis, AI systems could help researchers identify patterns, constraints, and underlying principles governing conscious experiences. While these findings may not take the form of mathematical proofs, they could still provide valuable information and testable predictions that contribute to our understanding of consciousness.

In summary, the study of consciousness may require novel approaches and models that draw from diverse fields and methodologies. While mathematical proofs may not be the primary means of advancing our understanding of consciousness, AI and computational models can still play an essential role in exploring the range of possible conscious states and generating insights into the nature of subjective experience.

INTRASCEND VR NOW ON QUEST & PICO

INTRASCEND is a psychedelic VR game with similarities to Super Hexagon.

Traverse psychedelic geometries... Updated with a unique head-tracked navigation control mechanism, and additional options.

INTRASCEND was originally released for the DK2 in 2016 by Ben Roberts. I took the original Unity 5 project and updated it with OpenXR, Unity URP, modern UI navigation, and head-rotation control.

The concept is brilliant and perfect for standalone VR: traverse an infinite series of expanding psychedelic geometries, orbiting to avoid collisions using your head or joystick.

Check out the main page for INTRASCEND or go to SideQuest now to get it!

Squingle's new Trailer and Passthrough Mixed Reality Update!

Squingle was at EGX London last week, where we released a brand new trailer by the amazing ANABURN, who make brilliant mixed reality trailers for VR games. Squingle also has a new Passthrough Update available to all Squingle users, which lets you play Squingle in your own living environment.

Happy Squingling!

Squingle | Feature Trailer - Now with PASSTHROUGH

Squingle Launches Today!

Squingle is released today on Oculus Quest via App Lab and SteamVR!

Buy it now:

Praise for Squingle:

“An even more psychedelic Rez”

“One of the most fascinating and original things ever seen in the world of video games - in virtual reality.”

“The aesthetic work alone, which also leaves room for micro details on the animation front - even at the level of Paper Beast - is undoubtedly worth the full price of the ticket.”

“Squingle will delight those who seek out from VR above all experiences out of the ordinary.”

“Enriched with subtleties in the level design that dwarf some of the most important AAA studios that have tried and failed to explore the language.”

“Everything in Squingle is so refined that it is literally ridiculous that a product like this, on Oculus Quest, was released on the app lab and not on the official store.”

“Squingle guides us, puzzle after puzzle, into the birth of the Universe and the path of the human being.”

“Squingle is as atypical as it is wonderful: one of the most beautiful products I have played in the last year of virtual reality, which ends up right on my personal Olympus, along with heavyweights like Rez Infinite, Thumper and Tetris Effect.”

“If you are even remotely interested in the title, I strongly recommend its purchase, it is definitely ahead of everything that has recently been released both on Steam VR and on the official Oculus Quest store.”

~ VR Italia

“The psychedelic visuals in Squingle are in a class of their own. Rich color fields pulse and writhe in the form of semi-organic maze structures, which themselves stretch and twist in colorful, liquid transparency across the playspace. The effect is brilliant and hypnotic.”

“Music -- The soundtrack for the game is a lovely, chilled-out synthetic ambient score, very relaxing and meditative.”

“Squingle's a beautiful, psychedelic maze-puzzler that's friendly both to casuals and competitive types, with a rich presentation and the temptation to replay to shoot for a higher score on each map, much like classic arcade games.”

~ Silex on Steam

How to track metrics on your game's levels using Unity Analytics

Here I show how and why you might use Unity analytics in your game.

It has never been easier to get insightful data from your players and beta testers for your game. It’s so easy, you’d be foolish not to implement it - it will take you less than a day!

At Squingle Studios, we use Analytics not only to track the number of users per day/month, but also to track which levels players give up at, how users perform in levels, where they die most frequently, and other vital information.

By sending data points with multiple parameters, you can then plot parameters against each other, giving deep insights to help improve your game.

Below is the Unity Analytics Dashboard, where I’m plotting the average stars, items, deaths, and completion time for each level. Levels where people are dying a lot may need adjusting.

Below is some data from the Beta of our psychedelic casual VR game Squingle. We’d love for you to get involved and join the Beta here! Please follow us:

A spiraling psychedelic adventure – Squingle is a game of precision, rhythm and satisfaction. Guide revolving orbs through pearlescent puzzles that react to you like liquid crystal – but be careful! Navigate complex geometries and avoid a choreography of obstacles along the way.

unity analytics.jpg

I can also see the ‘conversion rate’ - the proportion of players going on to each level.

unity analytics conversion.jpg
 

Setting up Unity Analytics

1) Open Unity

Make sure you are logged into your account (note that you do not need a Unity Pro license for this).

2) Enable Unity Analytics

Go to Window > General > Services (or Ctrl+0) to bring up the Services window, then click on ‘Analytics’ and enable it. Hey presto - this is all you need to do to track daily and monthly users. You can then click ‘Go to Dashboard’ to see analytics data for your game. Note that analytics data takes up to 12 hours to populate the dashboard after it has been sent from instances of your game (either in the editor or in builds) to the Unity Analytics server.

3) Create custom events

These can be used to track specific details of your game, such as deaths on each level. A (significant) limitation of Unity Analytics is that you can only send a maximum of 100 events per hour per user to the server. So send data at key moments, such as when a level is complete - which in my case happens about every 2 minutes.

Creating Custom Events

Sending data to Unity Analytics is as simple as calling the method in the class below. (Create a new C# script called ‘UnityAnalyticsManager.cs’ and add it to a GameObject.)

Here, the if-statement in the Update() method sends the ‘test_event’ to Unity Analytics when you click the mouse in play mode. The result of the attempt, ‘analytics_result’, tells you whether the data was sent (‘Ok’) or gives an error, e.g. if you are not connected or have already sent too many events (over 100 per hour).

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Analytics; //don't forget to include this!

public class UnityAnalyticsManager : MonoBehaviour
{
    void Update()
    {
        //send the test event on left mouse click while in play mode
        if (Input.GetMouseButtonDown(0))
        {
            TriggerTestAnalyticsEvent();
        }
    }

    public void TriggerTestAnalyticsEvent()
    {
        //returns AnalyticsResult.Ok on success, or an error code otherwise
        AnalyticsResult analytics_result = Analytics.CustomEvent("test_event");
        Debug.Log("Analytics result " + analytics_result);
    }
}

Sending Events with Multiple Parameters

To be able to look at, for example, how many deaths someone made before they completed a level, we’ll need to send more detailed information with multiple parameters, such as ‘level number’, ‘number of deaths that level’ and, probably, ‘game version number’, so we don’t muddle data coming from old beta versions of our game.

In the code below, I’ve made LevelComplete a public static method, so I can easily call it from my game code with just UnityAnalyticsManager.LevelComplete([input parameters]); Be sure to put ‘using UnityEngine.Analytics;’ at the top, or you will get an error. Note that the game_version field must also be static, otherwise the static method cannot read it.

This method will achieve what we are looking for, and allow us to set up a ‘funnel’ on the Unity Analytics Dashboard to track how players perform on each level.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Analytics; //don't forget to include this!

public class UnityAnalyticsManager : MonoBehaviour
{
    //static so the static LevelComplete() method below can read it
    public static string game_version = "v0.2.3";

    public static void LevelComplete(int level, int deaths, int time, int stars)
    {
        //Unity Analytics is limited to a max of 10 parameters per event. Below I am using only 4. You can send an int, float, or string.
        Dictionary<string, object> analyticsData = new Dictionary<string, object>
            {
                {"Level", level },
                {"Deaths", deaths },
                {"Time", time },
                {"Stars", stars }
            };

        AnalyticsResult result = Analytics.CustomEvent("LevelComplete_" + game_version, analyticsData);

        Debug.Log("Analytics Result (level complete): " + result);
    }
}
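
For example, you might call it from your own level-complete handler like this (a hypothetical call site; the values are illustrative):

//somewhere in your level manager, when the player finishes a level:
UnityAnalyticsManager.LevelComplete(
    12,  //the level just completed
    3,   //deaths during this attempt
    95,  //completion time in seconds
    2);  //stars earned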

Setting Up on Unity Analytics

Go to the Unity Analytics Dashboard

In Unity, make sure you are logged in, then open Window > General > Services, click ‘Analytics’ and - assuming you have already enabled it - hit ‘Go to Dashboard’.

Create a funnel

A funnel tracks how many players fall off between each level. Ideally, most players complete level 1, fewer complete level 2, fewer still level 3, and so on. By looking for big jumps in how many players progress between levels, you can determine where the difficult places are.

You will need to wait up to 12 hours after sending your first custom events before they show up, so be patient and have faith. If your event returned the console log message ‘Ok’, the data should come through in due course.

On the dashboard (in the browser), go to ‘Funnel Analyzer’ on the left, then ‘Create Funnel’. Add a step to the funnel for each level in your game - below you see the event ‘LevelComplete’ selected, with the parameter ‘level’ increasing each step. Unfortunately, you have to do this manually :(. I added the version number ‘b2’ (for ‘beta v2’) to the event name, but you could instead send it as a parameter and add a condition to each step if you want to differentiate data from different game versions.

funnel2.jpg

Look at your wonderful data

Once enough data has come in (remember it takes 12 hours!) you can start to do some analysis.

Conversion (player fall-off between levels)

Click ‘Funnel Analyzer’ on the dashboard, then click your funnel. It will show a graph of ‘conversion’ - the number of your players completing each level. Below, you can see that people gave up after 12 or so levels… :(. We will see why in the following sections.

funnel3.jpg

Deaths per level

Since we sent custom events with parameters such as ‘deaths’, ‘time’ and ‘stars’, we can also look at how people performed on each level. Here, it seems that my levels 12 and 13 caused 11 and 12 deaths on average - maybe that is why people gave up and stopped playing! Note that only one event per level per user is counted here, which gives you an impression of how players do the first time they complete each level.

funnel4.jpg

Time to complete each level

We can also see below that it took players progressively more time to complete levels, maxing out at 209 seconds on the last level before players quit. It goes to show - people have an attention span of about 3 minutes :P.

funnel5.jpg

Conclusions

We’ve highlighted how Unity Analytics can very easily be used to track the number of users, progression through your game, and the reasons people might be giving up.

There are many other ways you can use the tool for analysis - I’ve only focused on a simple use case here, but even this gives powerful insights. I recommend checking out others’ tutorials and YouTube videos.

The existing system has some limitations. You can only send a maximum of 100 events per hour per user, so be sure not to dive headlong into setting up detailed events for every little player interaction.
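
If you're worried about hitting that cap, a simple guard like the one below can drop excess events on the client side. This is a hypothetical helper of my own, not part of Unity's API:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Analytics;

//hypothetical helper: counts events in a rolling one-hour window and drops
//any extras, so the 100-events-per-hour cap is never exceeded
public static class AnalyticsThrottle
{
    const int MaxPerHour = 100;
    static int sentThisHour;
    static float windowStart;

    public static AnalyticsResult Send(string eventName, Dictionary<string, object> data = null)
    {
        if (Time.realtimeSinceStartup - windowStart > 3600f)
        {
            windowStart = Time.realtimeSinceStartup; //start a new one-hour window
            sentThisHour = 0;
        }
        if (sentThisHour >= MaxPerHour)
            return AnalyticsResult.TooManyRequests; //dropped locally, never sent

        sentThisHour++;
        return data == null
            ? Analytics.CustomEvent(eventName)
            : Analytics.CustomEvent(eventName, data);
    }
}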

Let me know via the contacts page or in comments below how you have been using Analytics, or if there is anything you feel I should add to this post to make it easier to understand.

Thanks! And do check out our indie VR project Squingle on Steam here!

Be transported on a rhythm puzzle journey in VR. Visit our website: www.squingle.com


Introducing Squingle Studios

Exciting times! See glimpses of my latest VR project and follow us on Twitter and Instagram (@SquingleStudios).

 


Juggling motion tracking data free download!

 

Motion-tracked juggling data was used to create this juggling sequence in VR.

I’ve been sitting on this juggling tracking data set for a long time, and I want to make it available to everyone to create new art and interactive experiences, or for research. The data was used to create the virtual juggling animation above, using a random sequence of juggling moves from the set. It was also used to create the VR game Neutrino, which you can download here.

The data were captured using an OptiTrack motion tracking system while I was at Keio University, and give the 3DOF position information for the balls and hands during 3-ball juggling moves.

Included moves:

Cascade, Mills Mess, Take outs under, take outs over, looping and orbiting techniques, tennis tricks, half mills, mills with takeouts, mills variations, box and variations, transitions such as cascade to reverse cascade etc.

Some of the data is in quite a raw format. If you have any questions, please feel free to contact me.

In addition, the folder called ‘JuggleMoves’ contains moves that can be animated into arbitrary sequences. The following describes what the moves are:

c1 = cascade (1/3 cycle)

cL = cascade left (1/6 cycle)

cR = cascade right

cL2tL = cascade left 2 take outs left

cR2tR = cascade right 2 take outs right

hL = half mills left

hR = half mills right

hL2mR = half mills left 2 mills right

hR2mL = half mills right 2 mills left

mL2cR = mills left 2 cascade right

mR2cR = mills right 2 cascade left

mL2hR = mills left 2 half mills right

mR2hL = mills right 2 half mills left

mR2tu2hL = mills right 2 takeout under 2 half mills left

mL2tu2hR = mills left 2 takeout under 2 half mills right

mL2t2mR = mills left 2 takeout 2 mills right

mR2t2mL = mills right 2 takeout 2 mills left

mL2tuR = mills left 2 takeout right

mR2tuT = mills right 2 takeout left

m1L = mills (LR cycle)

m1R = mills (RL cycle)

mL = mills left

mR = mills right (mL + mR = m)

c2t2c = cascade 2 takeout over 2 cascade //make LR versions

tL2cR = takeout left 2 cascade right

tR2cL = takeout right 2 cascade left

to = takeouts over (LR cycle)

toL = takeouts over left

toR = takeouts over right

tu = takeouts under (LR cycle)

tuL = takeouts under left

tuR = takeouts under right

cL2mL = cascade left 2 mills left

cR2mR = cascade right 2 mills right

Remembrain - Game Jam 2020

I joined the Global Game Jam to make this VR game set inside a brain! You have to repair forgotten memories by rearranging sentences, to make the brain healthy once again. You can find more information and download the game and source code here:

https://globalgamejam.org/2020/games/remembrain-6

remembrain.png

Hololens 2 development with Unity: Hand tracking and accessing finger/joint positions.

So you are developing an application for Hololens 2, and you can’t figure out how to get access to the finger joint positions? Here’s how (the code is also available on my GitHub here):

  1. Follow steps 1 and 2 of the instructions here: https://docs.microsoft.com/en-gb/windows/mixed-reality/mrlearning-base

  2. Then create a GameObject in your Unity scene and attach the script below to it.

  3. Finally, make prefabs of whatever you want placed at the fingertips, and drag them into the public GameObject fields (“fingerObject” and “wristObject”) on this script in the Inspector in Unity.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Input;

public class FollowTrackedFingers : MonoBehaviour
{
    public GameObject fingerObject;
    public GameObject wristObject;

    //offset of each marker along the wrist-to-fingertip direction (0 = exactly on the tip)
    public float dist = 0f;

    //the five fingertip joints queried each frame, in a fixed order
    static readonly TrackedHandJoint[] tipJoints =
    {
        TrackedHandJoint.ThumbTip,
        TrackedHandJoint.IndexTip,
        TrackedHandJoint.MiddleTip,
        TrackedHandJoint.RingTip,
        TrackedHandJoint.PinkyTip
    };

    Vector3[] tipsL = new Vector3[5];
    Vector3[] tipsR = new Vector3[5];
    Vector3 wristL = Vector3.zero;
    Vector3 wristR = Vector3.zero;

    List<GameObject> fingerObjectsL = new List<GameObject>();
    List<GameObject> fingerObjectsR = new List<GameObject>();

    GameObject wristObjectL;
    GameObject wristObjectR;

    MixedRealityPose pose;

    void Start()
    {
        //instantiate one marker per fingertip for each hand, plus one per wrist
        for (int i = 0; i < 5; i++)
        {
            fingerObjectsL.Add(Instantiate(fingerObject, this.transform));
            fingerObjectsR.Add(Instantiate(fingerObject, this.transform));
        }
        wristObjectL = Instantiate(wristObject, this.transform);
        wristObjectR = Instantiate(wristObject, this.transform);
    }

    void Update()
    {
        UpdateHand(Handedness.Left, tipsL, fingerObjectsL, ref wristL, wristObjectL);
        UpdateHand(Handedness.Right, tipsR, fingerObjectsR, ref wristR, wristObjectR);

        //place each marker at the fingertip, pushed outward from the wrist by 'dist'
        for (int i = 0; i < 5; i++)
        {
            fingerObjectsL[i].transform.position = tipsL[i] + (tipsL[i] - wristL) * dist;
            fingerObjectsR[i].transform.position = tipsR[i] + (tipsR[i] - wristR) * dist;
        }
        wristObjectL.transform.position = wristL;
        wristObjectR.transform.position = wristR;
    }

    void UpdateHand(Handedness hand, Vector3[] tips, List<GameObject> fingerObjects, ref Vector3 wrist, GameObject wristObj)
    {
        //only render a marker while its joint is actually tracked this frame
        for (int i = 0; i < 5; i++)
        {
            if (HandJointUtils.TryGetJointPose(tipJoints[i], hand, out pose))
            {
                tips[i] = pose.Position;
                fingerObjects[i].GetComponent<Renderer>().enabled = true;
            }
            else
            {
                fingerObjects[i].GetComponent<Renderer>().enabled = false;
            }
        }

        if (HandJointUtils.TryGetJointPose(TrackedHandJoint.Wrist, hand, out pose))
        {
            wrist = pose.Position;
            wristObj.GetComponent<Renderer>().enabled = true;
        }
        else
        {
            wristObj.GetComponent<Renderer>().enabled = false;
        }
    }
}
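
A natural next step (my own illustrative addition, using the same MRTK calls as above) is measuring the thumb-to-index pinch distance, which you could add as a method to the script above:

//returns the thumb-to-index distance in metres for one hand, or -1 if untracked
float PinchDistance(Handedness hand)
{
    if (HandJointUtils.TryGetJointPose(TrackedHandJoint.ThumbTip, hand, out MixedRealityPose thumb) &&
        HandJointUtils.TryGetJointPose(TrackedHandJoint.IndexTip, hand, out MixedRealityPose index))
    {
        return Vector3.Distance(thumb.Position, index.Position);
    }
    return -1f;
}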




Contour and false-colour height map shader for Unity

A shader for Unity that maps a model's y-position to an HSV colour with black contour lines. Also on GitHub here.

Instructions for use:

  1. In Unity, in the Project window, go to Create > Shader > Unlit Shader, open the newly created shader in Visual Studio, replace all of the code with the code below, and save.

  2. Create > Material, and in the newly created Material set the shader to Custom/ContourColour.

  3. Drag and drop the material onto a GameObject with a mesh and renderer.

 
bunny_contour_map.png
 
//CG code for Unity shader.

Shader "Custom/ContourColour"
{
    Properties{
        _HueShift("HueShift", Float) = 0
        _HueScale("HueScale", Float) = 1

        _LineFillRatio("LineFillRatio", Range(-0.001, 1)) = 0.3
        _LineScale("LineScale", Float) = 1

        _Brightness("Brightness", Range(-0.001, 2)) = 0.7
        _Contrast("Contrast", Range(-0.001, 1.5)) = 1
    }
        SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float _HueShift;
            float _HueScale;
            float _LineFillRatio;
            float _LineScale;
            float _Brightness;
            float _Contrast;

            half4 hsv_to_rgb(float3 HSV)
            {
                //standard HSV-to-RGB conversion: the hue selects one of six colour sectors
                half4 RGB = HSV.z;

                float var_h = HSV.x * 6;
                float var_i = floor(var_h);
                float var_1 = HSV.z * (1.0 - HSV.y);
                float var_2 = HSV.z * (1.0 - HSV.y * (var_h - var_i));
                float var_3 = HSV.z * (1.0 - HSV.y * (1 - (var_h - var_i)));

                if (var_i == 0) { RGB = half4(HSV.z, var_3, var_1, 1); }
                else if (var_i == 1) { RGB = half4(var_2, HSV.z, var_1, 1); }
                else if (var_i == 2) { RGB = half4(var_1, HSV.z, var_3, 1); }
                else if (var_i == 3) { RGB = half4(var_1, var_2, HSV.z, 1); }
                else if (var_i == 4) { RGB = half4(var_3, var_1, HSV.z, 1); }
                else { RGB = half4(HSV.z, var_1, var_2, 1); }

                return RGB;
            }


            struct v2f {
                float4 vertex : SV_POSITION;
                float3 localPos : TEXCOORD0;
                half3 worldNormal : TEXCOORD1;

                //fixed4 color : COLOR;
            };



            v2f vert(appdata_base v)
            {
                v2f o;

                o.localPos = v.vertex.xyz;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.worldNormal = UnityObjectToWorldNormal(v.normal);

                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                //hue cycles with model-space height; the +100000 keeps the modulo argument positive
                half4 col = hsv_to_rgb(float3((i.localPos.y * _HueScale + 100000 + _HueShift) % 1.0, 1, 1));
                //black contour lines: round() snaps each height band to 0 (line) or 1 (fill), with _LineFillRatio setting line thickness
                half4 lines = round((i.localPos.y * _LineScale + 100000) % 1.0 - _LineFillRatio + 0.5) * half4(1, 1, 1, 1);
                //simple shading from the world normal's z component
                half4 normals = ((i.worldNormal.z) * _Contrast + _Brightness) * half4(1, 1, 1, 1);

                return lines * col * normals;
            }

            ENDCG
        }
    }
}
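
The shader's properties can also be driven from a script at runtime. Below is a minimal sketch (a hypothetical component of my own; the property name must match the Properties block above):

using UnityEngine;

//slowly cycles the contour shader's hue over time
public class ContourHueAnimator : MonoBehaviour
{
    public Renderer targetRenderer; //renderer using the Custom/ContourColour material
    public float speed = 0.1f;      //hue cycles per second

    void Update()
    {
        //use .material (a per-object instance), not .sharedMaterial, so only this object changes
        targetRenderer.material.SetFloat("_HueShift", (Time.time * speed) % 1f);
    }
}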