

iQiyi: What to expect from the integration of AI and entertainment


Original Link

These 5 business applications show VR isn’t dying


Original Link

Dell Technology World 2018 Part I — Announcement Summary

This is part one of a five-part series summarizing the Dell Technology World 2018 announcements. Last week (April 30-May 3), I traveled to Las Vegas, Nevada to attend Dell Technology World 2018 (DTW 2018) as a guest of Dell (that is a disclosure, btw). There were several announcements along with plenty of other activity, from sessions to talk of AI, ML, DL, IoT, VR, analytics, VDI, SDDC, data infrastructure, Gen-Z, and composable infrastructure, among other topics. There was also plenty of meeting, hallway, and event networking taking place at DTW 2018.

Major data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA) solid-state device (SSD) NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements

Besides the above, additional data infrastructure related announcements were made in association with Dell Technology family members, including VMware along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technology, Dell Technologies Capital (venture capital), and Dell Financial Services.

Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, and acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written), VDI, venture capital, VMware, and VR among others.


Dell Technology World DTW 2018 Event and Venue

Dell Technology World 2018 was held at the combined Palazzo and Venetian hotels, along with the adjacent Sands Expo Center, kicking off Monday, April 30th and wrapping up May 3rd.

The theme for Dell Technology World (DTW) 2018 was “make it real,” which in some ways was interesting given the focus on the virtual, including virtual reality (VR), software-defined data center (SDDC) virtualization, data infrastructure topics, and artificial intelligence (AI).


Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center

There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.


Dell Technology World Drone Flying Area

During a break from some meetings, I used a few minutes to fly a drone using VR, which was interesting. I have been operating drones (see some videos here) for several years, flying heads-up by hand without depending on the first-person view (FPV) or extensive autonomous operation. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.


More views of the Dell Technology World Village and Commons Area with VR activity


Dell Technology World Bean Bag Area

Dell Technology World 2018 Announcement Summary

Ok, enough of the AI, ML, DL, and VR fun; time to move on to the business and technology topics of Dell Technologies World 2018.

What was announced at Dell Technology World 2018 included, among others:

Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.

Where to Learn More

Learn more about Dell Technology World 2018 and related topics via the following links:

What This All Means

On the surface, it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC Worlds and EMC Worlds. However, it turns out that there was a lot announced, granted, without some of the entertainment and the circus-like atmosphere of previous events. Continue reading here Part II Dell Technology World 2018 Modern Data Center Announcement Details in this series, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.

Ok, nuff said, for now.

Cheers Gs

Original Link

Day 15 of 100 Days of VR: Survival Shooter – Adding Shooting, Hit, and More Walking Sound Effects!

On Day 15, we’re going to continue adding more sound effects to our existing game, specifically:

  • Shooting sound effect
  • Sound of the knight getting hit
  • Player walking

Today is going to be a relatively short day, but let’s get started!

Adding Enemy Hit Sounds

To start off, we’re going to do something like what we did in Day 14: we’ll create Audio Source components in code and play the sound effects from there.

For the enemy hit sounds, we need to add our code to EnemyHealth:

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    public float Health = 100;
    public AudioClip[] HitSfxClips;
    public float HitSoundDelay = 0.5f;

    private Animator _animator;
    private AudioSource _audioSource;
    private float _hitTime;

    void Start()
    {
        _animator = GetComponent<Animator>();
        _hitTime = 0f;
        SetupSound();
    }

    void Update()
    {
        _hitTime += Time.deltaTime;
    }

    public void TakeDamage(float damage)
    {
        if (Health <= 0) { return; }
        Health -= damage;
        if (_hitTime > HitSoundDelay)
        {
            PlayRandomHit();
            _hitTime = 0;
        }
        if (Health <= 0) { Death(); }
    }

    private void SetupSound()
    {
        _audioSource = gameObject.AddComponent<AudioSource>();
        _audioSource.volume = 0.2f;
    }

    private void PlayRandomHit()
    {
        int index = Random.Range(0, HitSfxClips.Length);
        _audioSource.clip = HitSfxClips[index];
        _audioSource.Play();
    }

    private void Death()
    {
        _animator.SetTrigger("Death");
    }
}

The flow of our new code is:

  1. We create our Audio Source component in SetupSound() called from Start()
  2. We don’t want to play the sound of the knight being hit every single time we hit it; that’s why I set a _hitTime in Update() as a delay for the sound
  3. Whenever an enemy takes damage, we check whether we’re still in the delay for our hit sound; if we’re not, we play a random clip from the ones we added.

The code above should seem relatively familiar as we have seen it before in Day 14.

Once we have the code set up, the only thing left to do is to add the audio clips that we want to use, which in this case are Male_Hurt_01 through Male_Hurt_04.

That’s about it. If we shoot the enemy now, they’ll play their hit sounds.

Player Shooting Sounds

The next sound effect that we want to add is the sound of our shooting. To do that, we’re going to make similar adjustments to the PlayerShootingController.

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;
    public float ShootingDelay = 0.1f;
    public AudioClip ShotSfxClips;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;
    private float _timer;
    private AudioSource _audioSource;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
        _timer = 0;
        SetupSound();
    }

    void Update()
    {
        _timer += Time.deltaTime;
        if (Input.GetMouseButton(0) && _timer >= ShootingDelay)
        {
            Shoot();
        }
        else if (!Input.GetMouseButton(0))
        {
            _audioSource.Stop();
        }
    }

    private void Shoot()
    {
        _timer = 0;
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit = new RaycastHit();
        _audioSource.Play();
        if (Physics.Raycast(ray, out hit, Range, _shootableMask))
        {
            print("hit " + hit.collider.gameObject);
            _particle.Play();
            EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
            EnemyMovement enemyMovement = hit.collider.GetComponent<EnemyMovement>();
            if (enemyMovement != null)
            {
                enemyMovement.KnockBack();
            }
            if (health != null)
            {
                health.TakeDamage(1);
            }
        }
    }

    private void SetupSound()
    {
        _audioSource = gameObject.AddComponent<AudioSource>();
        _audioSource.volume = 0.2f;
        _audioSource.clip = ShotSfxClips;
    }
}

The flow of the code is like the previous ones; however, for our gun, I decided to use a machine gun sound instead of individual pistol shots.

  1. We still set up our Audio Source component in Start().
  2. The interesting part is in Update(): we play our audio in Shoot(), and as long as we’re holding down the mouse button, we keep playing the shooting sound; when we let go, we stop the audio.

After we add our script, we attach Machine_Gunfire_01 to the script component.

Player Walking Sound

Last but not least, we’re going to add the player walking sound in our PlayerController

using UnityEngine;

public class PlayerController : MonoBehaviour
{
    public float Speed = 3f;
    public AudioClip[] WalkingClips;
    public float WalkingDelay = 0.3f;

    private Vector3 _movement;
    private Rigidbody _playerRigidBody;
    private AudioSource _walkingAudioSource;
    private float _timer;

    private void Awake()
    {
        _playerRigidBody = GetComponent<Rigidbody>();
        _timer = 0f;
        SetupSound();
    }

    private void SetupSound()
    {
        _walkingAudioSource = gameObject.AddComponent<AudioSource>();
        _walkingAudioSource.volume = 0.8f;
    }

    private void FixedUpdate()
    {
        _timer += Time.deltaTime;
        float horizontal = Input.GetAxisRaw("Horizontal");
        float vertical = Input.GetAxisRaw("Vertical");
        if (horizontal != 0f || vertical != 0f)
        {
            Move(horizontal, vertical);
        }
    }

    private void Move(float horizontal, float vertical)
    {
        if (_timer >= WalkingDelay)
        {
            PlayRandomFootstep();
            _timer = 0f;
        }
        _movement = (vertical * transform.forward) + (horizontal * transform.right);
        _movement = _movement.normalized * Speed * Time.deltaTime;
        _playerRigidBody.MovePosition(transform.position + _movement);
    }

    private void PlayRandomFootstep()
    {
        int index = Random.Range(0, WalkingClips.Length);
        _walkingAudioSource.clip = WalkingClips[index];
        _walkingAudioSource.Play();
    }
}

Explanation

This code is like what we’ve seen before, but there were some changes made.

  1. As usual, we create the sound component, this time in Awake(), along with a walking sound delay.
  2. In FixedUpdate(), we made some changes. We don’t want to play our walking sound whenever we can; we only want to play it when we’re walking. To do this, I added a check to see if we’re moving before playing our sound in Move().

Also, notice that the audio volume is 0.8, as opposed to 0.2 for our other sounds. We want the player’s footsteps to be louder than the other sounds so we can tell the difference between the player walking and the enemy walking.

After writing the script, don’t forget to add the sound clips. In this case, I just re-used our footsteps, Footstep01 through Footstep04.

Conclusion

I’m going to call it quits for today for Day 15!

Today, we added more gameplay sound into the game so when we play, we have a more complete experience.

I’m concerned about what happens when we have more enemies and how that would affect the game, but that’ll be for a different day!

Original Link

Day 14 of 100 Days of VR: Survival Shooter – Finish Attacking the Enemy and Walking Sounds in Unity

We’re back on Day 14. I finally solved the pesky problem from Day 13 where the Knight refuses to get pushed back when we shoot at him.

Afterwards, I decided to get some sound effects to make the game a little livelier.

Without delay, let’s get started!

Adding Player Hit Effects – Part 2

As you might recall, we last ended up trying to push back the Knight when we shoot them by changing the Knight’s velocity; however, the Knight continued to run forward.

The Problem

After a long investigation, it turns out that the Brute running animation I used naturally moves the character’s position forward.

The Solution

After finally searching for “unity animation prevents movement,” I found the answer on StackOverflow.

In the Animator, disable Apply Root Motion; then we must apply the movement logic ourselves (which we already do).
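If you’d rather not rely on remembering that checkbox, the same thing can be done from code. Here’s a minimal sketch (the component name is mine; applyRootMotion is Unity’s Animator property):

using UnityEngine;

// Optional sketch: disable root motion at startup so our movement code
// (and the NavMeshAgent) stays in control of the knight's position.
public class DisableRootMotion : MonoBehaviour
{
    void Start()
    {
        GetComponent<Animator>().applyRootMotion = false;
    }
}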

Writing the Knockback Code

Once we have Root Motion disabled, we’re relying on our code to move our knight.

The first thing we need to do is update our PlayerShootingController script to call the knockback code:

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;
    public float ShootingDelay = 0.1f;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;
    private float _timer;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
        _timer = 0;
    }

    void Update()
    {
        _timer += Time.deltaTime;
        if (Input.GetMouseButton(0) && _timer >= ShootingDelay)
        {
            Shoot();
        }
    }

    private void Shoot()
    {
        _timer = 0;
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit = new RaycastHit();
        if (Physics.Raycast(ray, out hit, Range, _shootableMask))
        {
            print("hit " + hit.collider.gameObject);
            _particle.Play();
            EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
            EnemyMovement enemyMovement = hit.collider.GetComponent<EnemyMovement>();
            if (enemyMovement != null)
            {
                enemyMovement.KnockBack();
            }
            if (health != null)
            {
                health.TakeDamage(1);
            }
        }
    }
}

The biggest change is that we get our EnemyMovement script and then call KnockBack(), which we haven’t implemented yet.

Once we have this code in, we need to implement KnockBack() inside our EnemyMovement script. Here’s what it looks like:

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    public float KnockBackForce = 1.1f;

    private NavMeshAgent _nav;
    private Transform _player;
    private EnemyHealth _enemyHealth;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
        _enemyHealth = GetComponent<EnemyHealth>();
    }

    void Update()
    {
        if (_enemyHealth.Health > 0)
        {
            _nav.SetDestination(_player.position);
        }
        else
        {
            _nav.enabled = false;
        }
    }

    public void KnockBack()
    {
        _nav.velocity = -transform.forward * KnockBackForce;
    }
}

I know this was a one-liner for KnockBack(), but there was a lot of work involved to get to this point.

Here’s how the code works:

  1. When our shooting code hits the enemy, we call KnockBack(), which sets the velocity in the direction behind the knight, creating the illusion of being pushed back.
  2. This is only temporary as our Nav Mesh Agent will come back and move our Knight towards the player in the next Update()
  3. Here’s how KnockBackForce affects the velocity
    1. At 1, the knight stays in place when you shoot
    2. <1, the knight gets slowed down
    3. >1, the knight gets pushed back

Adding Sound Effects

Now that we’ve finally solved the knockback problem, we can move on to the next thing.

At this point, playing the game seems dull. Do you know what could make things a little bit more interesting? Sound effects!

I went back to the Unity Asset Store to find sound effect assets, specifically:

  1. Player shooting sound
  2. Player walking sound
  3. Player hit sound
  4. Enemy hit sound
  5. Enemy running sound
  6. Enemy attack sound

Randomly searching on Unity, I found the Actions SFX Vocal Kit which contains everything we need. Fantastic!

Once we have finished downloading and importing the SFX into our Unity project, we’ll start using them.

Adding Enemy Hit Sound Effects

Adding the Script

The first thing we’re going to do is add our Male_Hurt audio clips to our Knight.

Normally, we would just add an Audio Source component to our Knight. However, before that, let’s step back and think: what sounds does our knight need to play?

  1. Hit sound
  2. Walking sound
  3. Attack sound

If we were to add a single Audio Source component to the Knight object and use it to play every sound, one sound would immediately be replaced by the next. We don’t want that.

What we could do is create multiple AudioSource components and manually attach them to our script; however, that’s not very scalable if we ever decide we need more types of sounds.

Instead, I found this great way to add multiple audio sources on a single GameObject.

The idea is that instead of manually creating multiple components and then attaching them to a script component, why not create the component in code?

Here’s what I did:

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    public float KnockBackForce = 1.1f;
    public AudioClip[] WalkingClips;
    public float WalkingDelay = 0.4f;

    private NavMeshAgent _nav;
    private Transform _player;
    private EnemyHealth _enemyHealth;
    private Animator _animator;
    private AudioSource _walkingAudioSource;
    private float _time;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
        _enemyHealth = GetComponent<EnemyHealth>();
        _animator = GetComponent<Animator>();
        SetupSound();
        _time = 0f;
    }

    void Update()
    {
        _time += Time.deltaTime;
        if (_enemyHealth.Health > 0)
        {
            _nav.SetDestination(_player.position);
            // Play a footstep only while the Run state is active and the delay has elapsed.
            if (_time > WalkingDelay && _animator.GetCurrentAnimatorStateInfo(0).IsName("Run"))
            {
                PlayRandomFootstep();
                _time = 0f;
            }
        }
        else
        {
            _nav.enabled = false;
        }
    }

    private void SetupSound()
    {
        _walkingAudioSource = gameObject.AddComponent<AudioSource>();
        _walkingAudioSource.volume = 0.2f;
    }

    private void PlayRandomFootstep()
    {
        int index = Random.Range(0, WalkingClips.Length);
        _walkingAudioSource.clip = WalkingClips[index];
        _walkingAudioSource.Play();
    }

    public void KnockBack()
    {
        _nav.velocity = -transform.forward * KnockBackForce;
    }
}

There’s a lot of code that was added in, but I tried to separate it into easy-to-understand pieces.

Here’s the flow:

  1. In Start(), we instantiate our new private fields, specifically our new variables:
    1. _walkingAudioSource: our AudioSource for our steps
    2. _time: to track how long the enemy steps take
  2. We call SetupSound() from Start(), which creates a new AudioSource component at runtime and sets its volume to 0.2f
  3. In Update(), we add logic to play the stepping sound whenever WalkingDelay (0.4 seconds) has elapsed and we’re still in the running animation.
    1. Note: In GetCurrentAnimatorStateInfo(0), the 0 refers to the Animator’s base layer (layer index 0). From there, we can check which state the knight is in.
  4. In PlayRandomFootstep(), we randomly choose the walking sound clips that we downloaded and play them.

Once we have all of this, we need to add the audio clips in.

Go to the EnemyMovement script attached to the Knight and then, under Walking Clips, change the size to 4. We can do this because Walking Clips is an array of clips.

Then, add Footstep01 through Footstep04 into each slot. Make sure that Walking Delay is set to 0.4 if it’s not already.

Run the game and you’ll see that the enemy makes running sounds now!

If you’re using a different animation, you might have to change the Walking Delay to match the animation, but at a high level, that’s what you must do!

Whenever the knight attacks us, the sound will stop and whenever the knight resumes running after us (with the help of some shooting knockback), the running sound will resume!

Conclusion

Today on Day 14, we found that the knight knockback problem had to do with root motion in the animation we used.

After disabling it, we can start adding our knockback code without any problems.

With the knockback implemented, the next thing that we added was sound effects. We found some assets in the Unity store and then we added them to our enemy, where for the first time, we created a component via code.

My concern at this point is what happens when we start spawning a lot of knights? Will that create an unpleasant experience?

Either way, come back tomorrow for Day 15, where I decided I’m going to add the enemy hit sound and the player shooting sound.

Original Link

Day 13 of 100 Days of VR: Attacking Enemies, Health System, and Death Animation in Unity

Welcome back to day 13 of the 100 days of VR! Last time, we created enemy motions that used the Nav Mesh Agent to help us move our enemy Knight.

We added a trigger collider to help start the attack animations when the enemy got close to the player.

Finally, we added a mesh collider to the body of the knight so when it touches the player during its attack animation, we’ll be able to use the damage logic.

Today, we’re going to go on and implement the shooting logic for our player and fix the annoying bug where the player keeps moving perpetually after coming into contact with any other collider.

Fixing the Drifting Problem

My first guess was that something must be wrong with the Rigid Body component on our player.

If we recall, the Rigid Body is what puts our player under the control of Unity’s physics engine.

According to the documentation for RigidBody, the moment that anything collides with our player, the physics engine will exert velocity on us.

At this point, we have 2 options:

  • Set our velocity to be 0 after any collision (a minimal sketch of this option follows this list).
  • Make our drag value higher.
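For reference, here’s a minimal sketch of the first option, assuming the player object has a Rigidbody (the class name is illustrative; we ended up choosing drag instead):

using UnityEngine;

// Illustrative sketch of option 1: zero out whatever velocity the physics
// engine imparted to the player once a collision ends.
public class PlayerCollisionDamper : MonoBehaviour
{
    private Rigidbody _rigidbody;

    void Start()
    {
        _rigidbody = GetComponent<Rigidbody>();
    }

    void OnCollisionExit(Collision other)
    {
        // Cancel both linear and angular velocity so we stop drifting.
        _rigidbody.velocity = Vector3.zero;
        _rigidbody.angularVelocity = Vector3.zero;
    }
}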

What is drag? I didn’t really understand it the first time we encountered it either, but after doing more research, specifically reading about Rigidbody2D.drag, I learned that drag determines how quickly an object slows down due to friction: the higher the drag, the faster we slow down.

I switched the drag value in the RigidBody from 0 to 5.

I’m not sure exactly what the value represents, but before, our velocity never decreased from friction because our drag value was 0; after adding one in, we start slowing down over time.
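For what it’s worth, the same change can also be made from code instead of the Inspector; a tiny sketch (the helper class is hypothetical, and 5f matches the value chosen above):

using UnityEngine;

// Hypothetical setup helper: set the player's drag at startup
// instead of editing the Rigidbody in the Inspector.
public class PlayerDragSetup : MonoBehaviour
{
    void Awake()
    {
        // Higher drag makes collision-imparted velocity decay faster.
        GetComponent<Rigidbody>().drag = 5f;
    }
}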

Adding the Enemy Shooting Back Into the Game

After solving the drag problem, we’re finally going back to the main portion of the game: shooting our enemy.

There will be 2 places that we’re going to have to add our code in: EnemyHealth and EnemyMovement.

EnemyHealth

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    public float Health = 10;

    private Animator _animator;

    void Start()
    {
        _animator = GetComponent<Animator>();
    }

    public void TakeDamage(float damage)
    {
        if (Health <= 0) { return; }
        Health -= damage;
        if (Health <= 0) { Death(); }
    }

    private void Death()
    {
        _animator.SetTrigger("Death");
    }
}

Here’s the new flow of the code we added:

  1. In Start(), we instantiate our Animator that we’ll use later to play the death animation.
  2. In TakeDamage() (which is called from the PlayerShootingController) when the enemy dies, we call Death()
  3. In Death(), we set the Death trigger to make the Knight play the death animation.

Next, we need to make a quick change to EnemyMovement to stop our Knight from moving when it dies.

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    private NavMeshAgent _nav;
    private Transform _player;
    private EnemyHealth _enemyHealth;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
        _enemyHealth = GetComponent<EnemyHealth>();
    }

    void Update()
    {
        if (_enemyHealth.Health > 0)
        {
            _nav.SetDestination(_player.position);
        }
        else
        {
            _nav.enabled = false;
        }
    }
}

Here’s the code flow:

  1. In Start(), we grab the EnemyHealth script so we can access the knight’s health.
  2. In Update() if the knight is dead, we disable the Nav Mesh Agent, otherwise it continues walking like normal.

Now when we play the game, the knight enters the death state when defeated, like so:

Improving Shooting Mechanics

At this point, you might notice a problem….

…Okay, I know there are many problems, but there are two specific problems I’m referring to.

  1. The knight dies almost instantly whenever we shoot.
  2. When we shoot, nothing really happens to the enemy to make us feel like we even shot them.

So we’re going to fix these problems.

Adding a Shooting Delay

Right now, we always shoot a raycast at the enemy knight whenever Update() detects that our mouse is held down.

So, let’s add a delay to our Player Shooting Controller script.

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;
    public float ShootingDelay = 0.1f;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;
    private float _timer;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
        _timer = 0;
    }

    void Update()
    {
        _timer += Time.deltaTime;
        if (Input.GetMouseButton(0) && _timer >= ShootingDelay)
        {
            Shoot();
        }
    }

    private void Shoot()
    {
        _timer = 0;
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit = new RaycastHit();
        if (Physics.Raycast(ray, out hit, Range, _shootableMask))
        {
            print("hit " + hit.collider.gameObject);
            _particle.Play();
            EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
            if (health != null)
            {
                health.TakeDamage(1);
            }
        }
    }
}

Here’s the logic for what we added:

  1. We created our timer variable to figure out how long we must wait before we can shoot again
  2. In Update(), if we waited long enough, we can fire again
    1. Side note: I decided to move all of the shooting code into Shoot()
  3. Inside Shoot(), because the player fired, we’ll reset our timer and begin waiting until we can shoot again.

Adding Player Hit Effects

Setting Up the Game Objects

When we shoot our enemy knight, nothing really happens. He’ll just ignore you and continue walking towards you.

There are a lot of things we can do to make this better:

  1. Add sound effects.
  2. Add damage blood effects.
  3. Push him back.
  4. All of the above.

1) will be added eventually, 2) might be done later, but 3) is what I’m going to implement now.

Every time we shoot the knight, we want to push it back. This way if a mob of them swarm at us, we’ll have to manage which one to shoot first.

This little feature took a LONG time to resolve.

The Problem

Whenever we shoot an enemy, we want to push them back; however, the Nav Mesh Agent would override any changes we tried. Specifically, the knight always continued moving forward.

The Solution

We write some code that changes the velocity of the Nav Mesh Agent to go backwards for a couple of units.

However, when I did that, the knight continued running forward!

Why?

That’s a good question, one that I’m still investigating and hopefully find a solution by tomorrow.

End of Day 13

For the first time ever today, I started on a problem that I couldn’t solve in a day.

I’m expecting this to become more common as we start jumping deeper and deeper.

With that being said, today we fixed the player’s drifting problem by using drag and adding an enemy death animation when they run out of health.

Tomorrow, I’ll continue investigating how I can push the enemy back.

See you all on Day 14! Or whenever I can figure out this knockback problem!

Original Link

100 Days of VR Day 12: Survival Shooter – Creating AI Movements for Enemies in Unity

Here we are on Day 12 of the 100 days of VR. Yesterday, we looked at the power of rigged models and Unity’s Mecanim animation system (which I should have learned but ignored in the Survival Shooter tutorial…).

Today, we’re going to continue off after creating our animator controller.

We’re going to add the navigation component to our Knight Enemy so it can chase and attack the player. As you might recall, Unity provides us with an AI pathfinder that allows our game objects to move towards a destination while avoiding obstacles.

Moving the Enemy Toward the Player

Setting Up the Model

To be able to create an AI movement for our enemy, we need to add the Nav Mesh Agent component to our Knight game object. The only setting that I’m going to change is the Speed, which I set to 2.

At this point, we can delete our old enemy game object. We don’t need it anymore.

Next up, we need to create a NavMesh for our enemy to traverse.

Click on the Navigation panel next to the Inspector.

If it’s not there, then click on Window > Navigation to open up the pane.

Under the Bake tab, just hit Bake to create the NavMesh. I’m not looking to create anything special for our character right now.

Once we finish, we should have something like this if we show the nav that we created.

Make sure that the environment parent game object is set to static!

Creating the Script

At this point, the next thing we need to do is create the script that allows the enemy to chase us.

To do that, I created the EnemyMovement script and attached it to our knight.

Here’s what it looks like right now:

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    private NavMeshAgent _nav;
    private Transform _player;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
    }

    void Update()
    {
        _nav.SetDestination(_player.position);
    }
}

It’s pretty straightforward right now:

  • We get our player GameObject and the Nav Mesh Agent Component.
  • We set the Nav Agent to chase our player.

One important thing we have to do to make sure the code works is add the Player tag to our character so that we can grab the GameObject.

After that, we can play the game and we can see that our Knight enemy will chase us.

Using the Attack Animation

Right now, the Knight would run in a circle around us. But how do we get it to do an attack animation?

The first thing we need to do is attach a capsule collider component onto our knight game object and make these settings:

  • Is Trigger is checked
  • Center Y is 1
  • Radius is 1.5
  • Height is 1

Similar to what we did in the Survival Shooter, when our Knight gets close to us, we’ll switch to an attack animation that will damage the player.

When our new Capsule Collider comes into contact with the player, we’re going to add the logic to our Animator to begin the attack animation.

First, we’re going to create a new script called EnemyAttack and attach it to our Knight.

Here’s what it looks like:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    Animator _animator;
    GameObject _player;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
        _animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", true);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", false);
        }
    }
}

The logic for this is similar to what we’ve seen in the Survival Shooter. When our collider is triggered, we set “IsNearPlayer” to true so that the attacking animation starts, and when our player leaves the trigger range, the Knight stops attacking.

Note: If you’re having a problem where the Knight stops attacking the player after the first time, check the animation clip and make sure Loop Time is checked. I’m not sure how, but I disabled it.

Detecting Attack Animation

Adding a Mesh Collider

So now, the Knight will start the attack animation. You might notice that nothing happens to the player.

We’re not going to get to that today, but we’re going to write some of the starter code that will allow us to do damage later.

Currently, we have a Capsule Collider that will allow us to detect when the enemy is within striking range. The next thing we need to do is figure out if the enemy touches the player.

To do that, we’re going to attach a Mesh Collider on our enemy.

Unlike the previous collider, which is a trigger, this one will actually detect when the enemy collides with the player.

Make sure that we attach the body mesh that our Knight uses to our Mesh Collider.

I’ll note that, for some reason, the Knight’s mesh sits below the floor; however, I haven’t encountered any specific problems with this, so I decided to ignore it.

Adding an Event to Our Attack Animation

Before we move on to writing the code for when the Knight attacks the player, we have to add an event to the attack animation.

Specifically, I want to make it so that when the Knight attacks, if they collide with the player, we’ll take damage.

To do that, we’re going to do something similar to what the Survival Shooter tutorial did. We’re going to add an event inside our animation to call a function in our script.

We have 2 ways of doing this:

  1. We create an Animation event on imported clips from the model
  2. We add the Animation Event in the Animation tab from the animation clip

Since our knight model doesn’t include the animation we’re using, we’re going to add our event the second way.
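As an aside, events can also be attached to a clip from code at runtime with Unity’s AnimationEvent API. This is only a hedged sketch (the clip assignment and timing are assumptions), not the route we take below:

using UnityEngine;

// Sketch: attach an animation event to a clip from code. Events added this
// way are not saved to the asset; they exist only while the game runs.
public class AttackEventInstaller : MonoBehaviour
{
    public AnimationClip AttackClip; // assign a writable attack clip

    void Start()
    {
        AnimationEvent evt = new AnimationEvent();
        evt.time = 0.5f; // seconds into the clip where the hit should register
        evt.functionName = "Attack"; // must match a method on this GameObject
        AttackClip.AddEvent(evt);
    }
}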

We want to edit our Attack1 animation clip from the Brute Warrior Mecanim pack inside the Animator tab.

While selecting our Knight Animator Controller, click on Attack1 in the Animator and then select the Animation tab to open it.

If either of these tabs isn’t already open in your project, you can open them from the Window menu.

Now at this point, we’ll encounter a problem: our Attack1 animation is read-only and we can’t edit it.

What do we do?

According to this helpful post, we should just duplicate the animation clip.

So that’s what we’re going to do. Find Attack1 and hit Ctrl + D to duplicate our clip. I’m going to rename this to Knight Attack and I’m going to move this into my animations folder that I created in the project root directory.

Back in our Animator tab for the Knight Animator Controller, I’m going to switch the Attack1 state to use the new Knight Attack animation clip instead of the previous one.

Next, we’re going to have to figure out what’s a good point to set our trigger to call our code.

To do this, I dragged out the Animation tab and docked it pretty much anywhere else in the window, like so:

Select our Knight object in the game hierarchy, and you’ll notice that back in the Animation tab, the play button is now clickable.

If we click it, we’ll see that our knight will play the animation clip that we’re on.

Switch to Knight Attack and press play to see our attack animation.

From here, we need to figure out where would be a good point to run our script.

Playing the animation, I believe that triggering our event at frame 16 would be the best point to see if we should damage the player.

Next, we need to click the little + button right below 16 to create a new event. Drag that event to frame 16.

In the Inspector, we can select a function to call from the scripts attached to the object. Right now, we don’t have anything except for OnTrigger().

For now, let’s create an empty function called Attack() in our EnemyAttack script so we can use:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    Animator _animator;
    GameObject _player;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
        _animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", true);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", false);
        }
    }

    void Attack()
    {
    }
}

All I did was add Attack().

Now that we have this code, we might have to re-select the animation for the new function to be shown, but when you’re done, you should be able to see Attack() and we should have something like this now:

Updating Our EnemyAttack Script

So now that we finally have everything in our character setup, it’s finally time to get started in writing code.

So back in our EnemyAttack script, here’s what we have:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    private Animator _animator;
    private GameObject _player;
    private bool _collidedWithPlayer;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
        _animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", true);
        }
        print("enter trigger with _player");
    }

    void OnCollisionEnter(Collision other)
    {
        if (other.gameObject == _player)
        {
            _collidedWithPlayer = true;
        }
        print("enter collided with _player");
    }

    void OnCollisionExit(Collision other)
    {
        if (other.gameObject == _player)
        {
            _collidedWithPlayer = false;
        }
        print("exit collided with _player");
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", false);
        }
        print("exit trigger with _player");
    }

    void Attack()
    {
        if (_collidedWithPlayer)
        {
            print("player has been hit");
        }
    }
}

Here’s what I did:

  1. Added OnCollisionExit() and OnCollisionEnter() to detect when our Mesh Collider comes into contact with our player.
  2. Once it does, we set a boolean to indicate that we’ve collided with the player.
  3. Then when the attack animation plays, at exactly frame 16, we’ll call Attack(). If we’re still in contact with the Mesh Collider, our player will be hit. Otherwise, we’ll successfully have dodged the enemy.

And that’s it!

Play the game and look at the console logs to see when the knight gets within the attacking zone, when he bumps into the player, and when he successfully hits the player.

There are actually quite a few ways we could have implemented this, and I’m not sure which way is correct, but this is what I came up with.

Other things that we could have done, but didn’t, were:

  1. Made it so that if we ever come in contact with the enemy, whether attacking or not, we would take damage.
  2. Created an animation event at the beginning of Knight Attack that sets some sort of _isAttacking boolean to true; then, in our Update(), if the enemy is attacking and we’re in contact with them, the player takes damage and we set _isAttacking back to false, so we don’t get hit again in the same animation loop (a rough sketch follows this list).
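A rough sketch of that second alternative might look like the following. This is hypothetical (we didn’t implement it), with StartAttack() wired to an animation event at the first frame of Knight Attack:

using UnityEngine;

// Hypothetical version of alternative 2: apply at most one hit per attack
// animation loop while the enemy is in contact with the player.
public class EnemyAttackAlternative : MonoBehaviour
{
    private GameObject _player;
    private bool _isAttacking;
    private bool _collidedWithPlayer;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
    }

    // Called from an animation event at the start of Knight Attack.
    void StartAttack()
    {
        _isAttacking = true;
    }

    void OnCollisionEnter(Collision other)
    {
        if (other.gameObject == _player) { _collidedWithPlayer = true; }
    }

    void OnCollisionExit(Collision other)
    {
        if (other.gameObject == _player) { _collidedWithPlayer = false; }
    }

    void Update()
    {
        if (_isAttacking && _collidedWithPlayer)
        {
            print("player has been hit");
            _isAttacking = false; // prevents a second hit in the same loop
        }
    }
}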

Conclusion

And that’s that for Day 12! That actually took a lot longer than I thought!

Initially, I thought it would simply be a matter of applying the Nav Mesh Agent like we did in the Survival Shooter game; however, when I started thinking about attack animations, things became more complicated, and I spent a lot of time trying to figure out how to damage the player ONLY during the attack animation.

Tomorrow, I’m going to update the PlayerShootingController to be able to shoot our Knight enemy.

There’s a problem in our script. Currently, whenever we run into an enemy, for some strange reason, we start sliding in one direction forever. I don’t know what’s causing that, but we’ll fix it another day!

Original Link

Day 10: Survival Shooter – Creating an Enemy

Welcome to a very special day of my 100 days of VR. Day 10! That’s right. We’re finally in the double digits!

It’s been an enjoyable experience so far working with Unity, especially now that I know a bit more about putting together a 3D game.

We haven’t made it into the actual VR aspects of the game, but we have picked up some foundational Unity skills, which I’m sure will translate into the skills needed to create a real VR experience.

We’re starting to get the hang of what we can use in Unity to make a game. Yesterday, we created the beginning of the shooting mechanism.

Currently, whenever we hit something, we just print out what we hit. Today, we’re going to go in and create an enemy player that we can shoot and make some fixes.

Updating the Shooting Code

The first thing I would like to fix is that when we shoot, we shoot at whatever our cursor is pointing at, which is kind of weird.

Locking the Cursor to the Middle

This can be easily fixed by adding:

Cursor.lockState = CursorLockMode.Locked;

To Start() in our PlayerShootingController script:

We’ll have something like this:

void Start()
{
    _camera = Camera.main;
    _particle = GetComponentInChildren<ParticleSystem>();
    Cursor.lockState = CursorLockMode.Locked;
}

Now when we play the game, our cursor will be gone. It’ll still be in the middle of the screen; we just can’t see it.

Adding a Crosshair

At this point, we want some indicator to show where our “center” is.

To do this, we’re going to create a UI crosshair that we’ll put right in the middle.

In the hierarchy, add an Image, which we will call Crosshair. By doing this, Unity will also create a Canvas for us. We’ll call that HUD.

By default, our crosshair is already set in the middle, but it’s too big. Let’s make it smaller: in the Rect Transform, I set our image’s Width and Height to 10 by 10.

You should have something like this now:


Before we do anything else, we need to make sure that our UI elements don’t intercept the raycast we fire from the mouse.

In HUD, attach a Canvas Group component and, from there, uncheck Interactable and Blocks Raycasts. As you might recall, the Canvas Group component applies these 2 settings to its children without us having to do it manually ourselves.
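If you’d rather apply those two settings from code, here’s a small sketch of the equivalent (the class name is mine; interactable and blocksRaycasts are the Canvas Group’s actual properties):

using UnityEngine;

// Illustrative: apply the same two Canvas Group settings from code.
// Attach to the HUD object that holds the Canvas Group component.
public class HudRaycastSetup : MonoBehaviour
{
    void Awake()
    {
        CanvasGroup group = GetComponent<CanvasGroup>();
        group.interactable = false;
        group.blocksRaycasts = false;
    }
}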

Go ahead and play around with it. If we watch our console, whenever we fire, we hit wherever our little “crosshair” is located.

Creating Our Enemy

So now that we’ve fixed our cursor to the center, the next thing we need to do is create an enemy.

We’ll improve upon this, but for now, let’s create our first enemy! A cube!

Add a Cube to your hierarchy, name it Enemy, and then drag it near our player.

Boom! First enemy!


Now currently, nothing really happens when you shoot at it, so let’s fix that by adding an enemy health script. We’ll call it EnemyHealth.

Here’s what the code looks like:

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    public float Health = 10;

    public void TakeDamage(float damage)
    {
        Health -= damage;
        if (Health <= 0)
        {
            Destroy(gameObject);
        }
    }
}

It’s relatively simple:

  1. We have our health
  2. We have a public function that we call when our player hits the enemy, which decreases the enemy’s HP
  3. When it reaches 0, we make our enemy disappear

Now before we update our script, let’s make some optimizations to our raycast.

Go to our Enemy game object and set its layer to Shootable. If that layer doesn’t exist (which it most likely doesn’t), create a new layer, call it Shootable, and then assign it to the Enemy object.

Now let’s go back to our PlayerShootingController, grab the EnemyHealth script that we just created, and make the enemy take damage:

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
    }

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit = new RaycastHit();
            if (Physics.Raycast(ray, out hit, Range, _shootableMask))
            {
                print("hit " + hit.collider.gameObject);
                _particle.Play();
                EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
                if (health != null)
                {
                    health.TakeDamage(1);
                }
            }
        }
    }
}

The changes are very similar to what we have seen before with Survival Shooter, but here’s what we added:

  1. We created our LayerMask for our Shootable layer and passed it into our Raycast function:
    1. Note: I tried to use an int at first to represent our LayerMask, but for some reason, the Raycast ignored the int. From searching around online, I found that instead of using the int representation, we should just pass the actual LayerMask object. When I gave that a try, it worked…. So yay? (See the sketch after this list.)
  2. Next, when we hit an object, which at this point can only be the Enemy, we grab the EnemyHealth script that we added and make the enemy take 1 damage. Do this 10 times and the enemy will die.
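The likely explanation for that int mystery: Physics.Raycast() expects a layer bitmask, while LayerMask.NameToLayer() returns a layer index, so a raw index silently filters on the wrong layers. A small sketch showing two equivalent ways to build a proper mask:

using UnityEngine;

// Why the raw int probably failed: Physics.Raycast wants a bitmask,
// not a layer index. Both lines below produce the same mask.
public class LayerMaskExample : MonoBehaviour
{
    void Start()
    {
        int byShift = 1 << LayerMask.NameToLayer("Shootable"); // index -> bitmask
        int byHelper = LayerMask.GetMask("Shootable");         // helper does the shift
        print(byShift == byHelper); // prints: True
    }
}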

Now with this script attached to our enemy, shoot our cube 10 times (which should happen really fast), and then BOOM, gone.

Conclusion

And that’s about as far as I got for Day 10! Today was a bit brief because I didn’t get much time to work, but I think we made some good progress!

We created the basis for an enemy and added a basic crosshair UI that we can use. Tomorrow, I’m going to start looking into seeing how to add an enemy from the Unity Asset Store into the game.

Until then, I’ll see you all on day 11!

Original Link

Future of UX Design: What to Expect in 2018

At DashBouquet, we are keen to learn new things and constantly improve our skills and knowledge. That’s why we always keep an eye on the most modern trends in order to stay atop the competition and not only satisfy the requirements of our clients but also the needs of the users. So we thought it would be a good idea to share with you the most expected UX design trends for 2018 and see why they will matter so much.

People Expect Immediacy

Today the world revolves around the concept of immediacy. Snapchat and Instagram stories, real-time streaming, and much more — people are now used to the fact that they can always see what others are doing right now, and they expect such options in almost every app they use.

Original Link

Top UI Design Trends For 2018


Gone are the days when functionality alone used to drive the usage of web applications and other software, and developers cared less about what the software looked like. Software was driven mainly by what it did and little by its looks.

However, things have changed, technology has evolved, and user experience is now taken very seriously. The user has become very important so the software has to look good enough to be used without hassle. Besides, the software is useless without a user.

As the year 2018 progresses, UI design will continue to improve, so it would be great to have a look at some of the trends we expect UI designs to experience this year.

Fullscreen Videos at the Peak

In a world where time is precious and almost everyone is trying to soak in a lot of information in a short period, the usage of videos has saved people time and increased productivity.

In the past, write-ups alone used to be the major means of passing information; then images seemed to be a better option. Right now, videos seem better still and will be used as a means of passing information on websites this year.

Videos are interactive, extremely dynamic, and engaging. They catch the attention of viewers very easily, and from the designer’s angle, they look very beautiful on single-page websites.

Gradients are Beautiful


Gradients help create very beautiful designs, and they could be very critical in developing quality user interfaces.

With websites such as Spotify tapping into the potential of creating designs with gradients, it shouldn’t take long before other websites and software begin tapping into it as well.

Really, not many things are more beautiful than a perfect combination of colours.

2018 should bring a perfect mix of bright and cool colours, with appropriate contrast and mix levels, leaving the user wanting more great designs.

Long Forms Suck, but They Will Stay

As stated earlier, today‘s users have little or no time to read through thousands of words, so we all skip long content and search for what really matters to us.

Rarely does anyone read the Terms and Conditions or the EULA—all we do is scroll and search for the accept button to click.

However, sometimes long written content is actually needed. It is needed to pass important information in detail, and this sort of content leads to unconscious scrolling and neglect of the information in the long form.

In 2018, this issue should be tackled. With new UI designs making headway, a design that takes this into consideration would definitely be appreciated.

This leads us to the next:

Cards Are Here to Stay

The use of cards in UI design has gained lots of acceptance ever since it became popular.

In a world where users access software more from their mobile devices than from their computer screens, the card design will continue to have a great impact on the user experience.

Looking back at the long-form situation, cards could possibly help tackle the issue, especially if users are able to swipe through the content like a pack of cards instead of the traditional scrolling that has to be done on many websites.

Expect more improvement in card design in 2018; for as long as the mobile interface remains important, cards will continue to enjoy a high level of acceptance.

Saying Our Final Goodbyes to the Grid

The grid has contributed greatly to the creation of beautiful user interfaces on websites and other software, as it allows for easy navigation.

However, the status quo has definitely been challenged by none other than Apple, which has been pushing the acceptance of gridless display.

This has clearly been a good development, as the gridless approach hasn’t had a negative impact on user experience as feared, and it has also given the designer a better shot at showcasing creativity and unleashing the best of design techniques while developing user interfaces.

Admittedly, the grid still has a huge part to play as it offers consistency and balance to user interfaces, giving users a great onsite experience as it always has.

The Bolder the Font, the Better


It is no news that typography plays a huge role in creating great user interfaces.

Regardless of content type, be it articles, images, or videos, fonts are highly needed: they not only pass information, they are also a means of adding beauty to the display.

Since fonts are of such great importance, it is logical to make them bigger, bolder, and more beautiful, and that’s exactly what is going to happen as this year progresses.

UI designs will feature bigger typography than before. This development is not totally new to UI designers, as flat design has gained popularity over the last three years and the bigger font is simply an improvement on that design.

Material Design Remains in the Game

Material design gained popularity about four years ago when Google introduced it to the Android user interface (it debuted alongside Android Lollipop). It gained a lot of acceptance due to its simplicity, sharpness, and lightness.

One would expect it to die out soon, considering it has been in and around the UI design environment for over four years.

However, that is not the case. Google is improving on it this year and we all would be waiting to see if it’s a hit or a miss.

Perhaps then, material design will earn the tag “cat with nine lives.”

Virtual Reality and Augmented Reality Come into the Picture

The rate at which Virtual Reality and Augmented Reality are being used increases daily, with Chief Technology Officers of various companies trying to find ways for their companies to tap into the great potential of this technology.

The use of VR and AR is expected to increase this year, so the need to produce great designs for those platforms will experience an increase, too.

VR and AR will capture the attention of wider masses and create room for more opportunities for product designers to show their creativity.

Conclusion

In the end, a lot of factors will determine which UI designs turn out to be popular this year, so we really can’t be sure what will trend. One thing that is sure is that a lot of things will be experimented with, and UI design will definitely see improvement in 2018.

With the development of new UI design tools, one can only expect the best to happen in 2018.

Original Link

Mozilla Announces Firefox Reality Browser for Mixed Reality, GnuCash 3.0 New Release and More

Mozilla announced Firefox Reality today, “Bringing the Immersive Web to Mixed Reality Headsets”. Firefox Reality is the only open source browser for mixed reality and the first cross-platform browser for mixed reality. See The Mozilla Blog for more details.

GnuCash 3.0 was released yesterday, marking the first stable release in the 3.x series. This version has several new features, but the main update is the use of the Gtk+-3.0 Toolkit and the WebKit2Gtk API. See the announcement for a list of all the new features for both users and developers.

Kernel 4.17 will have 500,000 fewer lines of code, as maintainers have decided to deprecate support for old CPU architectures. As written in the pull request on the LKML, “This removes the entire architecture code for blackfin, cris, frv, m32r, metag, mn10300, score, and tile, including the associated device drivers.”

Compete in the second annual Linux Game Jam! Submissions will be accepted starting April 5th and the deadline is April 14th. This year’s theme is “Versatile Verbs”. See the website for all the rules.

OpenSSH 7.7 was released this morning. This version is primarily a bugfix release.

And in other new releases, the OpenBSD team announced new version 6.3 yesterday. This update features SMP support on arm64 and multiple security improvements, including Meltdown/Spectre (variant 2) mitigations. See the release page for the complete list of changes.

Original Link

Linux 4.16 Released, SLES SP3 for Raspberry Pi, Cloudflare Launches the 1.1.1.1 Privacy-First DNS Service and More

News briefs for April 2, 2018.

Linux 4.16 was released yesterday. Linus says “the take from final week of the 4.16 release looks a lot like rc7, in that about half of it is networking. If it wasn’t for that, it would all be very small and calm.”

SUSE recently released SLES SP3 for the Raspberry Pi, which includes full commercial support for enterprise users. The new version “targets the Raspberry Pi Model 3 B, although SUSE says it is planning support for the new Raspberry Pi Model 3 B+”. In addition, SUSE “developers have made the new image smaller—around 630MB—by trimming compilers and debugging tools while tuning the Arm OS for IoT tasks”. For more details, see the ZDNet article.

Cloudflare announced yesterday the launch of 1.1.1.1, “the Internet’s fastest, privacy-first consumer DNS service”. Cloudflare is focused on privacy, and it has “committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours”.

Arcan is working on developing Safespaces, “an open source VR desktop”, designed to run on the Arcan display server. See the Arcan blog for more information and a demo video. You can check out the code on GitHub.

Everspace, a 3D single-player space-shooter game, is officially coming to Linux soon. Rockfish games announced it’s planning to release a patch with bugfixes and improved joystick support in two to four weeks, adding “We also hope to announce the official Linux release, then!”

Original Link

Biggest Names in Development Industry – DeveloperWeek 2018 Part 2

In my previous blog post, I had a look at some of the award winners of DeveloperWeek 2018. In today’s post I will continue to overview them and the latest trends in development and modern IT world.

DevOps

ElectricFlow


Adaptive Release Automation

Electric Cloud is a company that strives to simplify Ops and help organizations deliver better software at a faster pace. Its DevOps Release Automation powers continuous delivery (CD). CD aims to keep software release-ready and offers a repeatable, reliable way to deploy software to any environment.

So ElectricFlow is basically that Release Automation we just talked about. It allows teams to coordinate releases on demand and automate deployment at any scale, and it also offers tracking and measurement tools. Needless to say, the company got its award for a reason: their solution takes development to a new, more efficient level.

ElectricFlow dashboard

IoT Software

InfluxEnterprise


The Modern Engine for Metrics and Events

Influx Data offers an open-source, modern Time Series Platform. The company carefully reviewed and considered one of the biggest issues in the modern IT world, Big Data processing, and offered a solution that allows companies to meet constantly changing requirements while keeping their work at a high quality level.

Due to the heavy use of cloud-native apps and services and increasing investment in IoT, Time Series Platforms are on the rise. These platforms can support the requirements of real-time data processing and analyze great amounts of metrics so that companies can gain a competitive advantage from all that data.

Because content is king these days, it’s crucial for companies to identify the value in their data and turn it to their own advantage, and this solution from Influx Data without doubt handles that task.

Influx overview

3D & VR/AR Development

Interaction Engine

Image title

Reach into Virtual Reality with Your Bare Hands

We’ve already written a few posts on the rise of VR and AR, so it’s no wonder a lot of companies are trying to surf the wave of hype and develop products related to these trends. Leap Motion turned out to be a front-runner with its Interaction Engine, which got the DeveloperWeek award.

The Interaction Engine by Leap Motion lets users work with a VR app by interacting with either physical or pseudo-physical objects. In other words, if your app has objects that need to be touched, moved, and so on, the Interaction Engine can do a bit (or even all) of the work for you.

In addition to that (as if Leap Motion weren’t cool enough!), the company also enables users to summon and interact with virtual objects from a distance. Instead of making you walk up to an object, you can touch or move it without standing close to it. Some next-level experience indeed.

Leap Motion

Enterprise Solutions

MarkLogic9

Image title

The Evolution of the Database

MarkLogic is a database for integrating data from silos, and it bills itself as the only existing NoSQL solution created specifically for enterprises. The company uses a flexible, multi-model approach that can handle data from any source with no problem at all. The database includes built-in search to make your work easier, and it’s also 100% ACID compliant.

The company’s latest release is MarkLogic 9, which offers new data integration, increased security, and many more features that together help companies get an easier, actionable 360-degree view of their data. MarkLogic 9 is billed as the company’s most ambitious release yet and is already being recognized as a valuable tool.

MarkLogic platform side view

Coding Frameworks/Libraries

npm Enterprise

Image title

Take Enterprise Development to New Heights

npm is a package manager for JavaScript and the biggest software registry in the world. It is used to install, share, and distribute code, manage project dependencies, and share feedback. npm’s products suit projects and teams of any size, from browsing and installing public code to customized support and SLAs.

npm Enterprise enables you to run npm’s infrastructure behind your company’s firewall, using the same codebase that powers the public registry. The product provides the features large organizations need and serves multiple purposes: easy sharing of private modules, workflow control, enhanced security, and much more.

npm settings

Of course, these are not all of the DeveloperWeek winners, just the ones that seemed most interesting to me. Maybe they will inspire you to go out there and create something revolutionary yourself, or maybe you can consider certain ideas for your own product. Either way, the more you know, the better your business will grow in the modern IT environment, and we will continue to bring you the latest IT news.

Original Link

7 Mobile App Development Trends to Watch Out for in 2018

Swiftly moving away from the world of web apps and desktop accessibility, 2017 saw an upsurge in the number of users choosing mobile to be their primary option for internet access. Unlike what skeptics predicted, mobile app development was no bubble, nor was it an impermanent trend destined to run out, like an iPhone battery running on iOS 11. Mobile apps are a culture we have all grown accustomed to, from their presence in location-based apps, to their development on the spectrum of augmented and virtual reality.

In 2018, with respect to mobile app development, we will either be seeing brand new trends, or a huge upgrade from what users are already using. Before we dig into these trends, it is important to run through the anticipated hardware upgrades that will fuel the future of mobile app development.

The primary design of the smartphone is all set for an overhaul. Given the screen design introduced by Apple in their iPhone X, the likes of Samsung and its Android contemporaries in China will be looking to introduce devices without buttons. At the risk of sounding overly optimistic, one can hope for bendable designs to be made cost-effectively.

Of the estimated 254 billion apps downloaded in 2017, over 90% were slaves to network speeds, especially in the markets of Africa and Asia, where infrastructure remains a pressing constraint and mobile app performance needs enhancement. With 5G network trials set to be underway across the world, the realm of app development will witness a change. The integrated chips powering our smartphones are being improved as I write this, and in 2018 they are set to go a notch higher, allowing mobile app developers greater room to play when it comes to app intricacies. Fans of augmented and virtual reality can already feel the adrenalin rush surging.

These enhancements in hardware are going to pave the way for mobile app development trends in 2018. For startups that have been hit by a plague of failures since 2016, this year could be a time of redemption, if they manage to ride high on the following seven trends in mobile app development.

1. Accelerated Mobile Pages to Find More Relevance

AMP listings were integrated into Google search in 2016, and since then, developers have not looked back. Incorporating them into the app framework, developers have been able to use this boiled-down version of HTML for better user experience and retention, with Facebook Instant Articles being one of the many success stories.

2. Demand for Wearable Devices and IoT to Rise

Thanks to Apple, the affordability constraint is out of the equation. Starting in 2018, app developers will be looking to build apps for wearable devices, mostly watches. Currently, the likes of Zomato and Uber have invested in wearable app development, but like most, they have only scratched the surface.

3. Augmented and Virtual Reality Will Influence Mobile Strategies

Pokémon Go may have been a temporary storm on the eastern seaboard, but AR and VR are here to stay. With the technology predicted to reap over $200 billion in revenue by 2020, developers are expected to create breathtaking mobile app experiences in AR and VR, and with compatible hardware entering the market, we can’t wait to get this party started.

4. More Businesses Will Invest in Cloud Integration

It took years, but the world is finally waking up to the possibilities offered by cloud computing and integration. Streamlined operations, reduced hosting costs, better storage and loading capacity, and increased user retention are a few of the many advantages of developing mobile apps over the cloud.

5. Mobile App Security to Gain Extra Attention

Yes, yes, we know it is nothing new, but with Uber coming out of the metaphoric closet and admitting it was hacked, app developers will be looking to invest more in cybersecurity, given that it is directly linked to users’ data privacy and protection laws. The finest minds in the industry will have to up the ante to drown out the uncertainty around mobile apps.

6. Predictive Analytics to Influence Mobile UI/UX

Mobile apps are going to move on from being mere utilities to being an integral part of your workflow. Giants like Facebook, Google, and Apple are already employing AI to use predictive analytics to enhance the customer journey across the UI/UX of the app, and 2018 is set to witness advances in this field.

7. Rising Popularity of On-Demand Apps

What was once termed an inevitable bubble in the realm of mobile app development is now the future. With industries embracing the on-demand business model like an old friend, one can expect UI/UX enhancements, m-commerce facilities, predictive analysis, and business bots to fuel the growth of Uber-like apps in 2018.

Summing Up

At the risk of sounding like a cliché, one cannot emphasize enough that smartphones are the future, given how trends in mobile app development have captivated users across the globe for the last 3 years. As we usher in another year, with inquisitiveness and excitement galore, the future of mobile app development resides with some of the finest brains in the business.

Original Link

What to Expect From VR in 2018

The world is changing rapidly thanks to two brothers – virtual and augmented reality. The debate about which one is the elder is for another day, but virtual reality currently holds the reins in the technology world. It is considered one of the most important and powerful inventions of our time, and it is expected to see wide-scale adoption by many industry leaders in a few years.

While we saw many great use cases of VR in 2017, the future is anticipated to be even more exciting. Here is a list of things to expect from VR in 2018.

But first, here are a few statistics. By the end of 2016, VR revenues reached USD 3.5 billion globally, with over 50 million individuals using the technology. It is expected to rise to USD 4.6 billion in 2017, and reach about 170 million users by 2018! By 2020, the global VR market is expected to surpass USD 40 billion.

In 2017, we witnessed the release of numerous VR headsets and gear from industry giants like HTC, Oculus, Samsung, and Google. However, we feel that that was just the tip of the iceberg. 2018 will prove to be much more radical than 2017.

VR can be applied to many disciplines, such as gaming, entertainment, marketing, engineering, education, training, art, and simulation. The fascinating thing about VR is that the technology is evolving and being improved every minute!

VR in Business

VR headsets have reached classrooms, offices, hospitals, and malls. They are changing the way we perceive advertising and marketing. Business owners, with the help of VR, are able to provide better knowledge-based solutions, which are being well received by customers of all types.

With the amount of technological innovation going on, VR will be in a top position in a few years. It has a plethora of applications: empowering people to better understand the world, pacifying kids at the doctor’s, helping stressed job seekers blow off steam, and more!

Oculus Rift is already modernizing the entertainment and gaming industry with its revolutionary headsets that show fascinating images and landscapes in real-time.

3D films were considered revolutionary a few decades ago; now, however, movie buffs expect more from their movie experience. Enter 12D and 9D shows! These currently have their application in amusement parks, but they are expected to penetrate the entertainment market soon.

Those trips to museums and galleries that are often missed due to time constraints can now be taken virtually; even short vacations can be recreated. VR provides a powerfully immersive experience of exotic and remote locations without having to physically visit them.

The healthcare industry has also adopted VR with fervor. Doctors wear VR headsets to understand the complexities of a patient’s organs before performing surgery. As the technology evolves, it is expected to further this process, leading to better healthcare and an improved patient experience.

VR Trends

Beyond Vision and Sound!

VR currently involves only two senses – vision and sound. However, progress is being made to involve the other senses as well. Smell is the first other sense apart from sound and vision to be included in VR. We saw a glimpse of that at the 2017 Tokyo Game Show.

The addition of an extra element in VR opens the door for many applications across a wide range of industries.

This is just one example of how VR technology is starting to engage other senses; further massive improvements are expected in this regard.

Social Activities

Given the ubiquity of social media, VR has massive potential across most channels. Currently, Facebook entices users with its 360° photos and videos; in a few years, however, these are likely to be replaced with full VR.

The growing affordability and popularity of VR will help companies push it to the far corners of the world. Social media channels such as YouTube, Pinterest, Snapchat, and Instagram are expected to be early adopters of VR.

V-Commerce

It’s no secret that VR has the potential to disrupt e-commerce. When it does, v-commerce is the name that will be used to describe it. The idea is that customers will have the chance to try on clothes using VR.

The current Asian commercial market has already adopted VR. Alibaba, the Chinese e-commerce giant, in 2016 introduced VR to its customers across China. With this, the company attracted over 30,000 shoppers in just a couple of days. A week later, the number of shoppers rose from 30,000 to 8 million.

Many other e-commerce businesses and retailers are expected to follow in Alibaba’s footsteps within a year’s time. This will help them improve their existing portfolios and increase their customer bases.

Employment

There is currently a dire need for VR professionals across the world, and this trend is only going to grow over the next year.

VR tech, and the industry in general, will see a tremendous rise in demand for professionals and experts in VR development. As the amount of content released increases, so will the demand for qualified VR professionals. Subsequently, other industries such as advertising, marketing, and design will also be affected by the increased demand for VR content.

Advertising

Advertising is one industry that constantly leverages the latest technology to advertise effectively, and VR is not going to be an exception. VR is one of the most effective tools for planting a brand in the customer’s mind.

VR advertising can easily take advantage of users’ social profiles and display relevant advertisements or content through a simulated reality. This is expected to have a massive impact on the customer’s experience and customer journey.

Because the applications of VR span so many industries, the possibilities for advertising are virtually unlimited.

VR tech is only going to get better and better, and devices are evolving to accommodate it. This is great news for tech companies in the VR ecosystem, for marketers, and ultimately for consumers. Because end consumers are in the loop, organizations are pushed to innovate and bring new devices and software to market.

Until a few years ago, the possibility of watching a 3D movie at home dazzled us. Now, we are able to completely immerse ourselves in a parallel world and tap into a completely different experience.

The commercial impact of VR is going to be gigantic. Brands should start looking at VR as a potential marketing channel to stay relevant, taking advantage of upcoming innovation to gain an edge over their competition.

Predicting the exact progress of VR and AR over the next few years is difficult, but it is likely that the technology will become commonplace and that developers will focus on it to enrich customer experiences.

What are your thoughts about VR technology trends for 2018? Do you think the aforementioned trends will define VR in 2018? Share your thoughts and suggestions by commenting below.

Original Link

Video: New VR arcade game is an 8-way firefight

The popular anime Ghost In The Shell is now an eight-person, all-out firefight in a VR arcade.

About Steven

Steven’s interested in ecommerce, mobile, smartphone adoption, gadgets, social media, transportation, and cars. If you have any tips or feedback, contact him on Twitter: @sirsteven

Original Link

ViroCore: SceneKit for Android Developers

Two New Products to Help Accelerate AR/VR Development on Android

  • ViroCore — a SceneKit equivalent for Android, enabling native AR/VR development using Java.
  • ViroReact: ARCore support — we added ARCore support for ViroReact. Developers can now build cross platform AR/VR apps across Android and iOS using a single code base.

Our mission at Viro is enabling AR/VR everywhere by building tools that simplify development. Enabling more developers to build AR/VR experiences will lead to a better, larger and more diverse ecosystem of apps. We started with ViroReact, our AR/VR platform for web and mobile developers leveraging React Native. With the launch of ARKit, we saw how Apple democratized AR development on iOS with SceneKit. We wanted to offer that same experience, native performance with descriptive APIs, to Android developers with ViroCore. (Read what XDA has to say about ViroCore.)

The Viro platform is free with no limits on distribution. Sign up for an API key and start building AR/VR apps today using ViroCore or ViroReact.

ViroCore

ViroCore is SceneKit for Android developers using Java. ViroCore combines a high-performance rendering engine with a descriptive API for creating immersive AR/VR apps. While lower-level APIs like OpenGL require you to learn and precisely implement rendering algorithms, ViroCore requires only high-level scene descriptions and the events and interactions you desire. Easily add animations, physics, particle effects, and more to your Android applications.

ViroCore is the perfect alternative to specialized game engines for building AR/VR apps. It allows companies to focus on what they do best, in the languages they know best, instead of training or hiring specialized 3D developers. ViroCore supports ARCore, Google Cardboard, Daydream and Gear VR.

ViroCore Hello World

With ViroCore, developers have access to a feature-rich platform necessary to build robust AR apps:

  • Create stunning scenes with HDR rendering, lighting, and shadows
  • Enable mixed reality with full support for immersive media such as 3D models (FBX and OBJ), 360 photos/videos and stereoscopic photos/videos
  • Add real-world mechanics to your objects with physics and animation
  • Emit smoke, fog, fire and other moving liquids with a full-featured particle system

You can build your first AR/VR app in minutes. Just sign up for a free API key and follow our easy Getting Started instructions. For more detail check our extensive development Guides and Javadoc API reference. We are excited to see what the Android community builds with ViroCore.

ARCore Support for ViroReact

ViroReact now supports ARCore, in addition to ARKit, making it fully cross-platform compatible for mobile AR development. Developers can use one code base for their AR/VR apps across iOS and Android. Current ViroReact developers, your ARKit apps should work out of the box on ARCore!

ViroReact brings the best features of React Native to AR/VR development: declarative APIs, flexible layouts, responsive components, and cross-platform support. Viro enables fast and iterative development by offering testbed apps for iOS and Android, eliminating the need for Xcode or Android Studio while developing. Build immersive standalone AR apps or add features like Snapchat’s AR effects into existing apps with ViroReact.

Getting started with ViroReact is easy. Sign up for a free API key, then follow our Quick Start Guide to be set up in minutes. Check out our tutorials and code samples to start building your own app today.

How to build an interactive AR app in 5 mins

How to build AR Portals in 5 mins

Add Snapchat-like AR Lenses to any app

AR and VR Code Samples

We are excited to see the great AR/VR apps you build with ViroCore and ViroReact. Follow us on Twitter, Facebook and Instagram for updates and announcements.

Original Link

3 cool VR startups you can meet in South Korea this December

Image credit: Pixabay.

As virtual reality (VR) technology attracts both consumers and investors, a wide range of startups are taking AR/VR beyond headsets to give users a more authentic experience.

Scheduled to be held from November 30 until December 2 in Seoul, Korea, Startup Festival 2017 is one of the leading events in Asia’s startup scene. Hosted by the Ministry of SMEs and Startups, the festival’s agenda focuses on the technologies driving the Fourth Industrial Revolution, such as IoT, fintech, ICO, and AR/VR.

Here are three AR/VR startups you need to watch out for at the festival:

1. Locomotion platform/VR treadmill WizDish

Does VR make you sick, literally? Many people experience simulation sickness when playing VR games. If you’re seated but your visual cues signal you’re walking, then chances are you will experience dizziness, nausea, and other symptoms related to motion sickness.

Your body feels disconnected when what you see doesn’t match what you feel. The evolutionary explanation for this is that your body assumes that you’ve been poisoned, so it tries to induce vomiting to cleanse your system.

Wizdish’s ROVR is a VR treadmill that allows a person to walk and move freely in VR worlds. The treadmill listens to the sound made by sliding feet and converts this into forward motion in games. This feature allows you to fully engage with your game and matches the visual input with your physical stimuli. Less sickness, more motion.

2. 3D audio startup Kinicho

Imagine you’re walking through a cave. Your eyes would probably dart to the dark shadows on the wall and the faint light from your torch. Your ears would also likely perk up because sound is an important aspect of “being present” in any given environment. If a bat were to fly over your head, the flapping sound from its wings wouldn’t be the same as what you’d hear if it were to fly beside you.

Traditional audio recordings, however, can only account for the sound from one fixed point – where the microphone was placed.

Image credit: Kinicho.

With 3D audio, the virtual sound in headphones is designed to come as close as it can to sounds in the real world. 3D audio startup Kinicho’s novel approach to producing 3D spatial audio in VR/AR helps developers take better control of their soundtracks.

To deliver a more authentic VR experience, Kinicho’s method takes into account the spatial relationship involving listeners, sound emitters, and the environment in a virtual world.

3. Smartphone VR visor Altergaze

Funded via Kickstarter, Altergaze merges the concept of “crowd manufacturing” with AR/VR. The result is a 3D-printed, smartphone-based VR headset that offers an immersive 110-degree field of view (FOV) experience – and it comes in a compact and wireless package.

Using 3D printing technology to create a product offers a high level of customization. At the moment, Altergaze boasts over 8.4 million unique variations depending on the design model, smartphone size, and color combination. The visor looks vaguely like the goggles worn by the minions in the popular movie Despicable Me.

Image credit: Altergaze Kickstarter.

The headset is compatible with any smartphone, regardless of platform and display size. Moreover, it uses a device that almost everyone owns: a smartphone. Just slide it into the Altergaze headset, and you’re good to go.

Catch a glimpse of these three promising startups at Startup Festival 2017. At the event grounds, startups and VCs will have the opportunity to network and hold one-on-one consultations. There will also be an On-Air Zone where startups can gain media exposure.

Original Link

Day 6 of 100 Days of VR: Survival Shooter – Tutorial II

Today, on Day 6, we’re going to finish the rest of the Survival Shooter tutorial and finally move on to developing a simple game of my own!

Today, we will learn more about:

  • Creating the UI
  • Attacking and Moving for the player and the enemies
  • Raycasting
  • Creating more animations
  • …And more!

So let’s get started!

Health HUD

In the next part of the video series, we went on to create the health UI that comes into play when the enemy attacks us.

Creating the Canvas Parent

The first thing we want to do is create a new Canvas object in the hierarchy. We’ll call it HUDCanvas.

We add a Canvas Group component to our Canvas. According to the documentation, the settings we make on a Canvas Group apply to all of its children.

Specifically, we want to uncheck Interactable and Blocks Raycasts. We don’t want the UI to do either of these things.

Adding the Health UI Container

Next, we create an Empty GameObject as a child to our HUDCanvas. This will be the parent container for our Health UI. We’ll call it HealthUI.

What’s interesting to note is that, because it’s a child of the Canvas, our GameObject also has a Rect Transform component attached.

Click on the Rect Transform and position our HealthUI to the bottom left corner of the game. Remember to hold alt + shift to move the anchor and the position!

Adding the Health Image

Next up, we create an Image UI as a child to the HealthUI. In the Image (Script) component, we just need to attach the provided Heart.png image.

You should see something like this in our scene tab:

And it should look something like this in our game tab:

Creating Our UI Slider

Next up, we need to create the HP bar that we use to indicate the HP that our player has.

We do that by creating a Slider UI GameObject as a child of our canvas. The Slider comes with child objects of its own. Delete everything except for Fill Area.

Next, we want to set up our HP. In the Slider GameObject, set Max Value to 100 and set Value to 100 as well.

Note: I was not able to get the slider to fit perfectly like the video did in the beginning. If you weren’t able to do so either, go to the Rect Transform of the slider and play with the positioning.

Adding a Screen Flicker When the Player Gets Hit

Next, we created an Image UI called DamageImage that’s a child of the HUDCanvas.

We want to make it fill out the whole canvas. This can be accomplished by going to Rect Transform, clicking the positioning box, and then clicking the stretch width and height button while holding alt + shift.

We also want to make the color fully transparent to start. We can do that by clicking on Color and moving the A (alpha) value to 0.

When you’re done with everything, your HUDCanvas should look something like this:

Player Health

Now that we have our Player Health UI created, it’s time to use it.

We attached an already created PlayerHealth script to our Player GameObject.

Here’s the code:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;
using UnityEngine.SceneManagement;

public class PlayerHealth : MonoBehaviour
{
    public int startingHealth = 100;    // The amount of health the player starts the game with.
    public int currentHealth;           // The current health the player has.
    public Slider healthSlider;         // Reference to the UI's health bar.
    public Image damageImage;           // Reference to an image to flash on the screen on being hurt.
    public AudioClip deathClip;         // The audio clip to play when the player dies.
    public float flashSpeed = 5f;       // The speed the damageImage will fade at.
    public Color flashColour = new Color(1f, 0f, 0f, 0.1f); // The colour the damageImage is set to, to flash.

    Animator anim;                      // Reference to the Animator component.
    AudioSource playerAudio;            // Reference to the AudioSource component.
    PlayerMovement playerMovement;      // Reference to the player's movement.
    //PlayerShooting playerShooting;    // Reference to the PlayerShooting script.
    bool isDead;                        // Whether the player is dead.
    bool damaged;                       // True when the player gets damaged.

    void Awake ()
    {
        // Setting up the references.
        anim = GetComponent <Animator> ();
        playerAudio = GetComponent <AudioSource> ();
        playerMovement = GetComponent <PlayerMovement> ();
        //playerShooting = GetComponentInChildren <PlayerShooting> ();

        // Set the initial health of the player.
        currentHealth = startingHealth;
    }

    void Update ()
    {
        // If the player has just been damaged...
        if(damaged)
        {
            // ... set the colour of the damageImage to the flash colour.
            damageImage.color = flashColour;
        }
        // Otherwise...
        else
        {
            // ... transition the colour back to clear.
            damageImage.color = Color.Lerp (damageImage.color, Color.clear, flashSpeed * Time.deltaTime);
        }

        // Reset the damaged flag.
        damaged = false;
    }

    public void TakeDamage (int amount)
    {
        // Set the damaged flag so the screen will flash.
        damaged = true;

        // Reduce the current health by the damage amount.
        currentHealth -= amount;

        // Set the health bar's value to the current health.
        healthSlider.value = currentHealth;

        // Play the hurt sound effect.
        playerAudio.Play ();

        // If the player has lost all its health and the death flag hasn't been set yet...
        if(currentHealth <= 0 && !isDead)
        {
            // ... it should die.
            Death ();
        }
    }

    void Death ()
    {
        // Set the death flag so this function won't be called again.
        isDead = true;

        // Turn off any remaining shooting effects.
        //playerShooting.DisableEffects ();

        // Tell the animator that the player is dead.
        anim.SetTrigger ("Die");

        // Set the audiosource to play the death clip and play it
        // (this will stop the hurt sound from playing).
        playerAudio.clip = deathClip;
        playerAudio.Play ();

        // Turn off the movement and shooting scripts.
        playerMovement.enabled = false;
        //playerShooting.enabled = false;
    }

    public void RestartLevel ()
    {
        // Reload the level that is currently loaded.
        SceneManager.LoadScene (0);
    }
}

Like before, the video commented out some of the code, because we haven’t reached that point yet.

It’s important to note how the functions have been separated into modules that specify what everything does instead of stuffing everything inside Update().

Some things to note from our script:

Looking at Update()

Inside Update(), we create the damage flicker animation effect.

If the player gets damaged (the damaged Boolean becomes true), we set the DamageImage to a red color; then we set the damaged Boolean back to false.

Afterwards, as Update() continues to be called each frame, we lerp the color from the damage flash back to the original clear color over time.

Taking Damage

How do we set damaged to be true? From TakeDamage()!

Notice the public in:

public void TakeDamage (int amount)

We’ve seen this before in the previous tutorial. As you recall, this means that we can call this function from any other script that has access to the component.
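To make that concrete, here’s a minimal sketch (not part of the tutorial; the SpikeTrap name and damage value are made up for illustration) of how any other script with access to the component could deal damage through that public method:

using UnityEngine;

// A hypothetical hazard: anything with a reference to the PlayerHealth
// component can call its public TakeDamage() method.
public class SpikeTrap : MonoBehaviour
{
    public int trapDamage = 15; // Illustrative damage value.

    void OnTriggerEnter (Collider other)
    {
        // Look up the PlayerHealth component on whatever entered the trigger.
        PlayerHealth playerHealth = other.GetComponent <PlayerHealth> ();

        // Only public members are reachable from outside the class.
        if(playerHealth != null)
        {
            playerHealth.TakeDamage (trapDamage);
        }
    }
}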

Attaching the Components to the Script

The rest of the code is pretty well documented, so I’ll leave it to you to read through the comments.

Before we move on, we have to attach the components to our script.

Creating the Enemy Attack Script

It was mentioned earlier that we have a public TakeDamage() function that other scripts can call. The question then is: which script calls it?

The answer: the EnemyAttack script. It’s already provided for us; just attach it to the enemy.

The code will look something like this:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    public float timeBetweenAttacks = 0.5f; // The time in seconds between each attack.
    public int attackDamage = 10;           // The amount of health taken away per attack.

    Animator anim;                          // Reference to the animator component.
    GameObject player;                      // Reference to the player GameObject.
    PlayerHealth playerHealth;              // Reference to the player's health.
    //EnemyHealth enemyHealth;              // Reference to this enemy's health.
    bool playerInRange;                     // Whether player is within the trigger collider and can be attacked.
    float timer;                            // Timer for counting up to the next attack.

    void Awake ()
    {
        // Setting up the references.
        player = GameObject.FindGameObjectWithTag ("Player");
        playerHealth = player.GetComponent <PlayerHealth> ();
        //enemyHealth = GetComponent<EnemyHealth>();
        anim = GetComponent <Animator> ();
    }

    void OnTriggerEnter (Collider other)
    {
        // If the entering collider is the player...
        if(other.gameObject == player)
        {
            // ... the player is in range.
            playerInRange = true;
        }
    }

    void OnTriggerExit (Collider other)
    {
        // If the exiting collider is the player...
        if(other.gameObject == player)
        {
            // ... the player is no longer in range.
            playerInRange = false;
        }
    }

    void Update ()
    {
        // Add the time since Update was last called to the timer.
        timer += Time.deltaTime;

        // If the timer exceeds the time between attacks and the player is in range...
        // (the enemy-health check stays commented out until the EnemyHealth script
        // is added later in the tutorial).
        if(timer >= timeBetweenAttacks && playerInRange /* && enemyHealth.currentHealth > 0 */)
        {
            // ... attack.
            Attack ();
        }

        // If the player has zero or less health...
        if(playerHealth.currentHealth <= 0)
        {
            // ... tell the animator the player is dead.
            anim.SetTrigger ("PlayerDead");
        }
    }

    void Attack ()
    {
        // Reset the timer.
        timer = 0f;

        // If the player has health to lose...
        if(playerHealth.currentHealth > 0)
        {
            // ... damage the player.
            playerHealth.TakeDamage (attackDamage);
        }
    }
}

Like before, some things are still commented out. The basic mechanics of the script are:

  • The enemy gets near the player, causing OnTriggerEnter() to fire, where we set the playerInRange Boolean to true.
  • In our Update() function, if it’s time to attack and the enemy is in range, we call the Attack() function, which then calls TakeDamage() if the player is still alive.
  • Afterwards, if the player has 0 or less HP, we set the animation trigger to make the player play the death animation.
  • Otherwise, if the player outruns the zombie and exits the collider, OnTriggerExit() is called and playerInRange is set to false, preventing further attacks.

With that, we have everything for the game to be functional… or at least in the sense that we can only run away and get killed by the enemy.

Note: If the monster doesn’t chase you, make sure you gave the Player object the Player tag; otherwise, the script won’t be able to find the Player object.

Harming Enemies

In the previous video, we made the enemy hunt down and kill the player. We currently have no way of fighting back.

We’re going to fix this in the next video by giving HP to the enemy. We can do that by attaching the EnemyHealth script to our Enemy GameObject.

Here’s the script:

using UnityEngine;
using UnityEngine.AI;

public class EnemyHealth : MonoBehaviour
{
    public int startingHealth = 100;    // The amount of health the enemy starts the game with.
    public int currentHealth;           // The current health the enemy has.
    public float sinkSpeed = 2.5f;      // The speed at which the enemy sinks through the floor when dead.
    public int scoreValue = 10;         // The amount added to the player's score when the enemy dies.
    public AudioClip deathClip;         // The sound to play when the enemy dies.

    Animator anim;                      // Reference to the animator.
    AudioSource enemyAudio;             // Reference to the audio source.
    ParticleSystem hitParticles;        // Reference to the particle system that plays when the enemy is damaged.
    CapsuleCollider capsuleCollider;    // Reference to the capsule collider.
    bool isDead;                        // Whether the enemy is dead.
    bool isSinking;                     // Whether the enemy has started sinking through the floor.

    void Awake ()
    {
        // Setting up the references.
        anim = GetComponent <Animator> ();
        enemyAudio = GetComponent <AudioSource> ();
        hitParticles = GetComponentInChildren <ParticleSystem> ();
        capsuleCollider = GetComponent <CapsuleCollider> ();

        // Setting the current health when the enemy first spawns.
        currentHealth = startingHealth;
    }

    void Update ()
    {
        // If the enemy should be sinking...
        if(isSinking)
        {
            // ... move the enemy down by the sinkSpeed per second.
            transform.Translate (-Vector3.up * sinkSpeed * Time.deltaTime);
        }
    }

    public void TakeDamage (int amount, Vector3 hitPoint)
    {
        // If the enemy is dead...
        if(isDead)
            // ... no need to take damage so exit the function.
            return;

        // Play the hurt sound effect.
        enemyAudio.Play ();

        // Reduce the current health by the amount of damage sustained.
        currentHealth -= amount;

        // Set the position of the particle system to where the hit was sustained.
        hitParticles.transform.position = hitPoint;

        // And play the particles.
        hitParticles.Play();

        // If the current health is less than or equal to zero...
        if(currentHealth <= 0)
        {
            // ... the enemy is dead.
            Death ();
        }
    }

    void Death ()
    {
        // The enemy is dead.
        isDead = true;

        // Turn the collider into a trigger so shots can pass through it.
        capsuleCollider.isTrigger = true;

        // Tell the animator that the enemy is dead.
        anim.SetTrigger ("Dead");

        // Change the audio clip of the audio source to the death clip
        // and play it (this will stop the hurt clip playing).
        enemyAudio.clip = deathClip;
        enemyAudio.Play ();
    }

    public void StartSinking ()
    {
        // Find and disable the Nav Mesh Agent.
        GetComponent <NavMeshAgent> ().enabled = false;

        // Find the rigidbody component and make it kinematic
        // (since we use Translate to sink the enemy).
        GetComponent <Rigidbody> ().isKinematic = true;

        // The enemy should now sink.
        isSinking = true;

        // Increase the score by the enemy's score value.
        ScoreManager.score += scoreValue;

        // After 2 seconds destroy the enemy.
        Destroy (gameObject, 2f);
    }
}

In a way, this is very similar to the PlayerHealth script that we have.

The biggest difference is that when the player dies, the game ends; when the enemy dies, however, we need to somehow get it out of the game.

The flow of this script would go something like this:

  • We initialize our script in Awake()
  • Whenever the enemy takes damage via our public function: TakeDamage(), we play our special effects to show the enemy received damage and adjust our health variable
  • If the enemy’s HP ends up 0 or below, we run the death function which triggers the death animation and other death related code.
  • We call StartSinking() which will set the isSinking Boolean to be true.
  • You might notice that StartSinking() isn’t called anywhere. That’s because it’s called as an event when our enemy animation finishes playing its death clip. You can find it under Events in the Animations for the Zombunny.

  • After isSinking is set to be true, our Update() function will start moving the enemy down beneath the ground.

Moving to the Player

Our enemy has HP now. The next thing we need to do is to make our player character damage our enemy.

The first thing we need to do is set up some special effects.

We need to copy the particle component on the GunParticles prefab…

and pass that into the GunBarrelEnd Game Object which is the child of Player.

Next, still in GunBarrelEnd, we add a Line Renderer component. This will be used to draw a line, which will be our bullet that gets fired out.

For a material, we use the LineRendererMaterial that’s provided for us.

We also set the width of our component to 0.05 so that the line we shoot looks like the fire from a small assault rifle that you might see in other games.

Make sure to disable the renderer as we don’t want to show this immediately when we load.

Next, we need to add a Light component. We set it to be yellow.

Next up, we attach the player gunshot clip to the AudioSource on our gun.

Finally, we attach the PlayerShooting script that was provided for us to shoot the gun. Here it is:

using UnityEngine;

public class PlayerShooting : MonoBehaviour
{
    public int damagePerShot = 20;              // The damage inflicted by each bullet.
    public float timeBetweenBullets = 0.15f;    // The time between each shot.
    public float range = 100f;                  // The distance the gun can fire.

    float timer;                                // A timer to determine when to fire.
    Ray shootRay;                               // A ray from the gun end forwards.
    RaycastHit shootHit;                        // A raycast hit to get information about what was hit.
    int shootableMask;                          // A layer mask so the raycast only hits things on the shootable layer.
    ParticleSystem gunParticles;                // Reference to the particle system.
    LineRenderer gunLine;                       // Reference to the line renderer.
    AudioSource gunAudio;                       // Reference to the audio source.
    Light gunLight;                             // Reference to the light component.
    float effectsDisplayTime = 0.2f;            // The proportion of the timeBetweenBullets that the effects will display for.

    void Awake ()
    {
        // Create a layer mask for the Shootable layer.
        shootableMask = LayerMask.GetMask ("Shootable");

        // Set up the references.
        gunParticles = GetComponent<ParticleSystem> ();
        gunLine = GetComponent <LineRenderer> ();
        gunAudio = GetComponent<AudioSource> ();
        gunLight = GetComponent<Light> ();
    }

    void Update ()
    {
        // Add the time since Update was last called to the timer.
        timer += Time.deltaTime;

        // If the Fire1 button is being pressed and it's time to fire...
        if(Input.GetButton ("Fire1") && timer >= timeBetweenBullets)
        {
            // ... shoot the gun.
            Shoot ();
        }

        // If the timer has exceeded the proportion of
        // timeBetweenBullets that the effects should be displayed for...
        if(timer >= timeBetweenBullets * effectsDisplayTime)
        {
            // ... disable the effects.
            DisableEffects ();
        }
    }

    public void DisableEffects ()
    {
        // Disable the line renderer and the light.
        gunLine.enabled = false;
        gunLight.enabled = false;
    }

    void Shoot ()
    {
        // Reset the timer.
        timer = 0f;

        // Play the gun shot audioclip.
        gunAudio.Play ();

        // Enable the light.
        gunLight.enabled = true;

        // Stop the particles from playing if they were, then start the particles.
        gunParticles.Stop ();
        gunParticles.Play ();

        // Enable the line renderer and set its first position to be the end of the gun.
        gunLine.enabled = true;
        gunLine.SetPosition (0, transform.position);

        // Set the shootRay so that it starts at the end of the gun and points forward from the barrel.
        shootRay.origin = transform.position;
        shootRay.direction = transform.forward;

        // Perform the raycast against gameobjects on the shootable layer and if it hits something...
        if(Physics.Raycast (shootRay, out shootHit, range, shootableMask))
        {
            // Try and find an EnemyHealth script on the gameobject hit.
            EnemyHealth enemyHealth = shootHit.collider.GetComponent <EnemyHealth> ();

            // If the EnemyHealth component exists...
            if(enemyHealth != null)
            {
                // ... the enemy should take damage.
                enemyHealth.TakeDamage (damagePerShot, shootHit.point);
            }

            // Set the second position of the line renderer to the point the raycast hit.
            gunLine.SetPosition (1, shootHit.point);
        }
        // If the raycast didn't hit anything on the shootable layer...
        else
        {
            // ... set the second position of the line renderer
            // to the fullest extent of the gun's range.
            gunLine.SetPosition (1, shootRay.origin + shootRay.direction * range);
        }
    }
}

The flow of our script is:

  • Awake() to initialize our variables
  • In Update(), we wait for the user to left click to shoot, which would call Shoot()
  • In Shoot(), we create a Raycast that goes straight forward until it either hits an enemy or a structure, or reaches the max distance we set. From there, we stretch our LineRenderer from the gun to the point we hit.
  • After a couple more frames in Update(), we will disable the LineRenderer to give the illusion that we’re firing something out.

At this point, we have to do some cleanup work. We have to go back to the EnemyMovement script and uncomment the code that stops the enemy from moving when either the player or it dies.

The changes are the previously commented-out enemyHealth references, which are now enabled:

using UnityEngine;
using System.Collections;

public class EnemyMovement : MonoBehaviour
{
    Transform player;
    PlayerHealth playerHealth;
    EnemyHealth enemyHealth;
    UnityEngine.AI.NavMeshAgent nav;

    void Awake ()
    {
        player = GameObject.FindGameObjectWithTag ("Player").transform;
        playerHealth = player.GetComponent <PlayerHealth> ();
        enemyHealth = GetComponent <EnemyHealth> ();
        nav = GetComponent <UnityEngine.AI.NavMeshAgent> ();
    }

    void Update ()
    {
        if(enemyHealth.currentHealth > 0 && playerHealth.currentHealth > 0)
        {
            nav.SetDestination (player.position);
        }
        else
        {
            nav.enabled = false;
        }
    }
}

After all of this is done, we have a playable game!

Note: if you start playing the game and nothing happens when you shoot the enemy, check that the enemy’s Layer is set to Shootable.

Scoring Points

At this point, we have a complete game! So what’s next? As you can guess from the next video, we’re creating a score system.

We end up doing something similar to the previous 2 video tutorials, where we put a UI Text on the screen.

Anchor

With that being said, we create a UI Text in our HUDCanvas. We set the Rect Transform anchor to the top. This time, we want to set just the anchor, by clicking the preset without holding alt + shift.

Font

Next, in the Text component, we want to change the Font to LuckiestGuy, a font asset that was provided for us.

Add Shadow Effect

Next up, we attach the shadow component to our text to give it a cool little shadow. I’ve played around with some of the values to make it look nice.

Adding the ScoreManager

Finally, we need to add a script that would keep track of our score. To do that, we’ll have to create a ScoreManager script, like the one provided for us:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class ScoreManager : MonoBehaviour
{
    public static int score;    // The player's score.

    Text text;                  // Reference to the Text component.

    void Awake ()
    {
        // Set up the reference.
        text = GetComponent <Text> ();

        // Reset the score.
        score = 0;
    }

    void Update ()
    {
        // Set the displayed text to be the word "Score" followed by the score value.
        text.text = "Score: " + score;
    }
}

This code is pretty straightforward. We have a score variable and we display that score in Unity, in every Update() call.

So where will score be updated? It won’t be in the ScoreManager, it’ll be whenever our enemy dies. Specifically, that’ll be in our EnemyHealth Script.

The relevant addition is in StartSinking(); the rest of the script is unchanged from the EnemyHealth listing above:

public void StartSinking ()
{
    // Find and disable the Nav Mesh Agent.
    GetComponent <NavMeshAgent> ().enabled = false;

    // Find the rigidbody component and make it kinematic
    // (since we use Translate to sink the enemy).
    GetComponent <Rigidbody> ().isKinematic = true;

    // The enemy should now sink.
    isSinking = true;

    // Increase the score by the enemy's score value.
    ScoreManager.score += scoreValue;

    // After 2 seconds destroy the enemy.
    Destroy (gameObject, 2f);
}

And that’s it! Now we can get a grand total score of… 1. But we’ll fix that in the next video when we add more enemies.

Creating a Prefab

Before we move on to the next video, we made a prefab of our enemy. As we saw in previous videos, a prefab can be described as a template of an existing GameObject.

They’re handy for making multiple copies of the same thing… like multiple enemies!

Spawning

In the next video, we learned how to create multiple enemies that chase after the player.

The first thing to be done was to create the Zombear.

For reusability, if you have enemy models with similar animations, like the Zombear and Zombunny, you can reuse the same animation clips.

However, I was not able to see any animation clips for the Zombear so… I decided to just skip this part.

Then at that point, I got into full-blown laziness and decided to skip the Hellephant too.

However, one important thing to note is that if we have models with the same types of animations but different meshes, we can create an AnimatorOverrideController that takes in an AnimatorController and swaps in that model’s animation clips.
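To make that concrete, here’s a minimal sketch of doing the override from code (the field names and the “Move” clip name are assumptions for illustration; in the tutorial this would be set up as an asset in the editor instead):

using UnityEngine;

public class ZombearAnimatorSetup : MonoBehaviour
{
    public RuntimeAnimatorController zombunnyController; // The shared base controller.
    public AnimationClip zombearMove;                    // This model's replacement clip.

    void Awake ()
    {
        // Build an override controller on top of the shared base controller.
        AnimatorOverrideController overrideController = new AnimatorOverrideController ();
        overrideController.runtimeAnimatorController = zombunnyController;

        // Swap a clip by the name it has in the base controller
        // ("Move" is an assumed clip name).
        overrideController["Move"] = zombearMove;

        // Hand the result to this model's Animator.
        GetComponent <Animator> ().runtimeAnimatorController = overrideController;
    }
}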

EnemyManager

So after our… brief attempt at adding multiple types of enemies, we have to somehow create a way to spawn an enemy.

To do this, we create an empty object which we’ll call EnemyManager in our hierarchy.

Then, we attach the EnemyManager script provided to it:

using UnityEngine;

public class EnemyManager : MonoBehaviour
{
    public PlayerHealth playerHealth;   // Reference to the player's health, so we stop spawning when they die.
    public GameObject enemy;            // The enemy prefab to spawn.
    public float spawnTime = 3f;        // How long between each spawn.
    public Transform[] spawnPoints;     // The spawn points this enemy can spawn from.

    void Start ()
    {
        // Call the Spawn method after a delay of spawnTime, and then repeatedly every spawnTime.
        InvokeRepeating ("Spawn", spawnTime, spawnTime);
    }

    void Spawn ()
    {
        // If the player has no health left, stop spawning.
        if(playerHealth.currentHealth <= 0f)
        {
            return;
        }

        // Pick a random index between zero (inclusive) and the number of spawn points (exclusive).
        int spawnPointIndex = Random.Range (0, spawnPoints.Length);

        // Create an instance of the enemy prefab at the randomly selected spawn point's position and rotation.
        Instantiate (enemy, spawnPoints[spawnPointIndex].position, spawnPoints[spawnPointIndex].rotation);
    }
}

The flow of this code is:

  • In Start(), we call InvokeRepeating to invoke the “Spawn” method after an initial delay of spawnTime and then repeatedly every spawnTime, with spawnTime being 3 seconds.
  • Inside Spawn(), we randomly pick one of the spawnPoints and create an enemy there. In this case, we only have 1 location; it was made an array for reusability.

And that’s it!

But before we move on, we have to create the spawn point.

We created a new empty object, Zombunny Spawn Point, and set it at:

  • Position: (-20.5, 0, 12.5)
  • Rotation: (0, 130, 0)

Then, just drag the Zombunny Spawn Point onto the spawnPoints field of the EnemyManager script to add the GameObject to our array.

If we followed the video exactly, we’d have multiple spawn points that are hard to tell apart.

Unity has an answer for that.

We can add a label by clicking on the colored cube in the Inspector of the GameObject and selecting a color:

Play the game, and now you should see an endless wave of Zombunnies coming at you! We’re really close to having a full game!

Game Over

In the final video in this tutorial, we create a more fluid game over state for the player.

Currently, when the player dies, all that happens is that we reload the game and the player starts over. We’re going to do better and add some nifty UI effects!

The first thing we want to do is create an Image UI that we’ll call ScreenFader. We set the color of the Image to black and the alpha to 0. Later on, we’ll create a transition that changes the alpha of the Image, giving us the effect of fading into the game.

Next, we created a Text UI called GameOverText to show the player that the game is over.

At this point, we have to make sure that we have this ordering inside our HUDCanvas:

  • HealthUI
  • DamageImage
  • ScreenFader
  • GameOverText
  • ScoreText

It’s important that we have this ordering, as the top element in the list is drawn on the screen first.

If we were to stack everything on top of each other, our HealthUI would be at the bottom and the ScoreText would be on top.

Creating an Animation

Now that we have all the UI elements in place, we want to create a UI animation.

The first thing we need to do is go to Window > Animation with HUDCanvas selected, to create a new animation using the objects attached to HUDCanvas.

Click Create a new clip and make a new clip called GameOverClip.

Click Add Property and select:

  • GameOverText > Rect Transform > Scale
  • GameOverText > Text > Color
  • ScoreText > Rect Transform > Scale
  • ScreenFader > Image > Color

This will add these 4 properties to our animation.

How the animation works is that you start at some initial value, represented by a diamond:

When you double-click in a property’s timeline, you create a diamond (a keyframe) for that property.

When you move the white line slider to a diamond and select it, you can change, in the Inspector, the value the property will have at that specific time in the animation.

Essentially, the animation makes gradual changes from the 1st diamond to the 2nd diamond, or from the original value to the diamond.

For example: if the X scale is 1 at 0:00 and 2 at 0:20, then at 0:10, the X scale will be 1.5.
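That keyframe interpolation is plain linear interpolation, which is the same math Mathf.Lerp does in code. A quick sketch using the numbers from the example above:

using UnityEngine;

public class LerpExample : MonoBehaviour
{
    void Start ()
    {
        float a = 1f;   // X scale at 0:00 (first diamond).
        float b = 2f;   // X scale at 0:20 (second diamond).
        float t = 0.5f; // 0:10 is halfway between the two keyframes.

        // Linear interpolation: a + (b - a) * t = 1 + (2 - 1) * 0.5 = 1.5.
        Debug.Log (Mathf.Lerp (a, b, t)); // Prints 1.5
    }
}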

So follow what was done in the above picture.

  • GameOverText : Scale – We want to create a popping text effect, where the text shrinks away and then pops back in.
    • 0:00 Scales are all 1
    • 0:20 Scales are all 0
    • 0:30 Scales are all 1
  • GameOverText : Text.Color – We want to create white text that gradually fades in.
    • 0:00 color is white with alpha at 0
    • 0:30 color is white with alpha at 255
  • ScoreText: Scale – we want the score to shrink a bit
    • 0:00 scales are all 1
    • 0:30 scales are all 0.8
  • ScreenFader : Image.Color – We want to gradually make a black background show up
    • 0:00 color is black with alpha 0
    • 0:30 color is black with alpha 255

When we create an animation, Unity automatically creates an Animator Controller named after the object we created the animation for (HUDCanvas).

Setting Up Our HUDCanvas Animator Controller

In the HUDCanvas animator controller, we create 2 new states.

One will act as the default state; the other we’ll name GameOver.

We also create a new trigger called GameOver.

We make New State our default state. From there, we create a transition from New State to GameOver that fires when the GameOver trigger is set.

You should have something like this when you’re done:

Save our work, and then we’re done!

Note: When we create an Animation from HUDCanvas, Unity adds the Animator component for us. If it doesn’t, manually add an Animator component to HUDCanvas and attach the HUDCanvas Animator Controller.

Creating a GameOverManager to Use Our Animation

Finally, in the last step, we need to write some code that plays the animation we just created when the game is over.

To do this, we just add the provided GameOverManager script to our HUDCanvas. Here’s the code:

using UnityEngine;

public class GameOverManager : MonoBehaviour
{
    public PlayerHealth playerHealth;   // Reference to the player's health.
    public float restartDelay = 5f;     // Time to wait before restarting the level.

    Animator anim;                      // Reference to the animator component.
    float restartTimer;                 // Timer to count up to restarting the level.

    void Awake ()
    {
        // Set up the reference.
        anim = GetComponent <Animator> ();
    }

    void Update ()
    {
        // If the player has run out of health...
        if(playerHealth.currentHealth <= 0)
        {
            // ... tell the animator the game is over.
            anim.SetTrigger ("GameOver");

            // .. increment a timer to count up to restarting.
            restartTimer += Time.deltaTime;

            // .. if it reaches the restart delay...
            if(restartTimer >= restartDelay)
            {
                // .. then reload the currently loaded level.
                // (Application.LoadLevel is deprecated in newer Unity versions;
                // SceneManager.LoadScene does the same job.)
                Application.LoadLevel (Application.loadedLevel);
            }
        }
    }
}

The basic flow of the code is:

  • We initialize our Animator by grabbing the Animator component that is attached to our game object inside Awake()
  • Inside Update(), we always check whether the player is alive; if he’s not, we fire the GameOver animation and run a timer so that once our clip is over, we restart the game.

Conclusion

Phew, this has really dragged on long past 2 days.

The only reason why I decided to follow through is:

  • There’s a lot of good learning that happens when you have to write everything up.
  • Most likely, this will be the last of the long articles. From now on, I’ll be going on by myself to create a simple game, and progress will be much slower as I Google for answers.

There were a lot of things that we saw again, but even more things that we learned.

We saw a lot of things that we already knew like:

  • The UI system
  • Colliders
  • Raycasts
  • Navigating Unity

…And then we saw a lot more things that we have never seen before like:

  • Character model animations
  • Management scripts to control the state of the game
  • Creating our own UI animation
  • Using Unity’s built in AI

It’s only Day 6 of our 100 days of VR. Please end me now; I’m going to collapse on my bed.

I’ll see you back for day 7 where I start trying to develop my own simple game.

Read Day 5, the previous chapter of this tutorial.

Original Link

What Is the Best Phone to Get for VR Development?

I know, you’re super excited to finally have our first look at what VR development will be like. I am too. Unfortunately, I’ve run into a real problem.

My current mobile device is a Note 4…

I know you’re thinking: “For a guy who’s working in tech, he sure has an outdated device!”

What can I say? Phones are expensive! I can’t afford them!

However, seeing as I’m trying to break into the mobile VR space, there are three types of HMDs (head-mounted displays) to consider:

  • Google Cardboard
  • Gear VR
  • Google Daydream View

Of these three, Google Cardboard is the poor man’s VR device ($10~), supported by pretty much every major phone since 2012. Moving to the higher end of the spectrum, we have the Gear VR and the Google Daydream at around $100.

A Google Cardboard doesn’t support any controllers and relies on our gaze and a button click; the other two devices have a controller that gives us more freedom.

While I could develop for the Google Cardboard, what I’m excited to try is the higher-end VR headgear: either the Gear VR or the Google Daydream View. So with that said, it’s time to upgrade my phone! Goodbye, money! You’ll be sorely missed!

The question now is: which head-mounted display do I want to develop for?

Here’s the result of my investigation!

First up is the Gear VR manufactured by Samsung and powered by Oculus.

  • Release Date: November 27, 2015
  • Cost: $129.99 (HMD + controller)
  • Supported SDK: Oculus
  • Supported controller: Controller, Touchpad
  • Apps Available: According to their own site, there are 800+ apps for Gear VR as of the time of this writing.
  • Total Devices Sold: 5 million in 2016.

Supported Phones

Now here’s the most important part: what type of phone would I need?

According to my favorite resource in the world, Wikipedia, and some other sites, the supported devices from oldest to newest are:

  • Samsung Galaxy Note 4*
  • Galaxy S6
  • Galaxy S6 Edge
  • Galaxy S6 Edge+
  • Samsung Galaxy Note 5
  • Galaxy S7
  • Galaxy S7 Edge
  • Galaxy S8
  • Galaxy S8+
  • Samsung Galaxy Note 8

* The Galaxy Note 4 DOES support Gear VR, but only the HMD, not the controller. It also overheats a lot.

Samsung is a huge brand, and as time goes on, more and more people will upgrade to newer Samsung phones that support Gear VR.

Another important detail is that from the Galaxy S8 onward, all devices also support the Google Daydream View.

The phones to consider are:

  • Cheapest supported phone: Galaxy S6 ($200~)
  • Cheapest phone that supports Gear VR and Google Daydream: Galaxy S8 ($600~)

Development Kit

Gear VR is powered by Oculus. Looking at their documentation, outside of going native, the two primary game engines that Oculus supports are, you guessed it: Unity and Unreal Engine.

For our case, Oculus provides a nice starter guide with samples that teach you how to use their tools. There are also a lot of other tutorials out there, like this one from Unity.

App Store Submission

Developers for Gear VR have to submit their apps to the Oculus app store. Submissions go through an approval process where editors review each app to make sure it meets the minimum requirements.

We can think of Oculus as the Apple of VR apps.

Next up, we have the Google Daydream View, which is not to be confused with the new Standalone Google Daydream headset that doesn’t require any mobile devices.

  • Release Date: November 10, 2016
  • Cost: $71.99 (HMD + controller)
  • Supported SDK: Daydream
  • Supported controller: Controller
  • Apps available: According to this source, there are 153 apps available as of March 2017.
  • Total Devices Sold: 260k in 2016. Note that there were only two months left in 2016 after launch. However, a gaming analyst company (SuperData) projects 6.8 million sold by the end of 2017; realistic or not, only time will tell.

Supported Phones

The requirements for the Google Daydream View are on the higher end. The supported phones are listed on Google’s own site, and many of them sit at the top of the price spectrum. As I recall, the primary reason is that Google Daydream requires more powerful phones to support its VR experience.

Of them all:

  • The cheapest phones are the Moto Z and the Axon 7 (both at $400~)
  • The cheapest (and only) phone that also supports Gear VR: the Galaxy S8 ($600~)

Development Kit

The Google Daydream View is supported by the Google VR SDK. Just like with Oculus, outside of going native, the SDK supports Unity and Unreal Engine.

In our case, our main interest is in Unity. Google provides their own documentation and tutorials, and Unity also has some documentation of its own.

However, I was not able to find as many comprehensive tutorials for the Daydream View compared to the Gear VR.

My guess is that we don’t see as many apps and tutorials for the Daydream because there aren’t many devices that support it.

What this means is that there isn’t a large enough audience to incentivize developers to make Daydream apps and write tutorials for them.

However, if we were to talk about the Google Cardboard, that’d be a completely different story!

App Store Submission Process

Like the Oculus store, Google has its own dedicated Daydream app store where you can find all the Daydream apps available.

When submitting your app to Google, you also go through a manual review process where editors make sure that your app meets all the standards.

The good news is that even if your application gets rejected for the Daydream store, your app will still be published in the normal play store.

Which Phone Should I Get?

After researching all options, it’s time to decide on which phone to get.

If you’re looking for a budget phone that supports one device, there are “cheap” alternatives for both the Gear VR ($200~) and the Google Daydream ($400~). However, if you want to be future-proof for both head displays, the cheapest option is the Samsung Galaxy S8 ($600~).

An important note: we don’t really need a phone that supports both. Realistically speaking, the best choice might be to focus on one platform and, once you have success there, consider getting a phone that supports the other.

However, I want to be that VR guy so I decided to order myself a Samsung Galaxy S8.

Pros and Cons of Each Platform

We’re currently at a fork in the road for our VR development. We must make a conscious decision on which platform to develop for: Oculus or Google.

Oculus

Pros of Gear VR:

  • Larger audience due to support for older Samsung phones.
  • More available documentation/tutorials.

Cons of Gear VR:

  • If you can’t get your app approved, you’re finished.

Google

Pros of Daydream View:

  • Even if your app gets rejected, it can still be put into the normal Play Store.
  • Shares a similar SDK with the Google Cardboard, so we can build a Cardboard app first and then add Daydream features to it afterward.

Cons of Daydream View:

  • The oldest supported phone for Google Daydream is from 2016. The supported audience is far smaller than Gear VR’s: 260k vs. 5 million devices sold. Of course, we’ll see how they compare when the 2017 numbers are released.

At this point, it seems that working with the Gear VR might be better due to an immediate larger audience size and availability of help.

However, an important component to consider for Google’s platform is the Google Cardboard.

Pros of Google Cardboard:

  • Cheap, easy to get, and at this point supported by most smartphones (i.e. a large audience), with an estimated 10 million devices sold in 2016.
  • A lot of available documentation and help.
  • Shares the same SDK with Daydream.
  • Supports both Android and iOS devices!
  • Apps all live in the Google Play Store, which most users are already familiar with.

Cons of Google Cardboard:

  • Doesn’t support controllers like the Daydream View or Gear VR do.
  • Doesn’t offer as high-quality an experience as the other two.

Which Platform Should I Develop For?

The answer depends on what you want to do!

If you want to create a high-end mobile VR app that reaches a larger audience, you should consider the Gear VR.

However, if you’re willing to wait/invest for the future, then Google might be a good play.

Currently, in Google’s platform, we can take advantage of the shared SDK between the Cardboard and the Daydream to create an app for all the cardboard users and then enhance it to use a controller for the Daydream users.

The main problem with the Daydream is the smaller number of users who have a Daydream View (and a supported device).

In the future, we’ll eventually reach a point where most people have upgraded to Daydream-ready phones. At that point, the Google Daydream View will reach the same level of availability as the Gear VR.

The big question is whether, by that point, Gear VR will already have entrenched itself as the platform to develop for. Realistically, I think with Unity’s support for VR, it *should* be easy to adapt a Gear VR app into a Daydream app and vice versa, so we can’t go wrong either way.

My Decision

Considering all these facts, I’m going to bet on Google’s platform, with my assumptions being:

  • There will be more Daydream-ready devices that people will eventually upgrade to, and…
  • There were a decent number of Daydream View purchases in 2017.

With that said, I have made my purchase of a new Galaxy S8 and tomorrow I’ll start looking into working with a Google Cardboard!

Original Link

Day 4 of 100 Days of VR: Going Through the Unity Space Shooter Tutorial III

Here we are on Day 4. Today, we’re going to finish the Unity Space Shooter by adding the remaining UI, the enemy spawning system, and then creating an endgame state.

So without any more delays, let’s get started!

Audio

To create audio for the game, we need to add an Audio Source component to our GameObject.

Inside the Audio Source component, we add our music file to the AudioClip slot.

A quick and easy way to add the Audio Source component is to just drag your music file onto the GameObject you wish to add it to.

There are a lot of controls that we can use. The most important in this case being:

  • Play On Awake – Play the sound when the GameObject that this AudioSource is connected to is created.

  • Loop – Repeats playing the music when it finishes.

  • Volume – Self-explanatory.

Adding Explosion Sounds to Asteroids

The audio samples are already all provided for us in the tutorial, so all we had to do was follow along and attach the explosion sound effects to the Explosion GameObject.

Afterwards, whenever an asteroid is destroyed, the explosion sound effect gets played.

On a side note, I want to mention that I really enjoy using Unity’s Component system. Normally, you have to manually code everything, but in Unity, it’s as easy as drag and drop!

Adding Shooting Sounds to the Bullets

For the bullet’s sound effects, the video had us use an existing script to play the sound of our bullets being fired.

However, I believe we could have just as easily attached an AudioSource to our bullet prefab, like we did with the explosion, and achieved the same thing when we instantiate the bullet.

Still, this approach is good to know, so let’s see the code:

using UnityEngine;
using System.Collections;

[System.Serializable]
public class Boundary
{
    public float xMin, xMax, zMin, zMax;
}

public class PlayerController : MonoBehaviour
{
    public float speed;
    public float tilt;
    public Boundary boundary;

    public GameObject shot;
    public Transform shotSpawn;
    public float fireRate;

    private float nextFire;
    private Rigidbody rb; // cached reference (the "rigidbody" shorthand the tutorial used is deprecated)

    void Start ()
    {
        rb = GetComponent<Rigidbody> ();
    }

    void Update ()
    {
        if (Input.GetButton ("Fire1") && Time.time > nextFire)
        {
            nextFire = Time.time + fireRate;
            Instantiate (shot, shotSpawn.position, shotSpawn.rotation);
            GetComponent<AudioSource> ().Play ();
        }
    }

    void FixedUpdate ()
    {
        float moveHorizontal = Input.GetAxis ("Horizontal");
        float moveVertical = Input.GetAxis ("Vertical");

        Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);
        rb.velocity = movement * speed;

        rb.position = new Vector3
        (
            Mathf.Clamp (rb.position.x, boundary.xMin, boundary.xMax),
            0.0f,
            Mathf.Clamp (rb.position.z, boundary.zMin, boundary.zMax)
        );

        rb.rotation = Quaternion.Euler (0.0f, 0.0f, rb.velocity.x * -tilt);
    }
}

The only addition is the highlighted part:

GetComponent<AudioSource>().Play ();

We use GetComponent to search through all the components attached to the game object for our AudioSource component. If it’s found, we get a reference to it, on which we call Play() to play the attached sound.
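One thing to keep in mind (this defensive check is my own addition, not part of the tutorial): GetComponent returns null when no matching component is attached, so it can be worth checking before calling Play():

AudioSource source = GetComponent<AudioSource> ();
if (source != null)
{
    source.Play ();
}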

Adding Background Music

Adding the background music is straightforward. We just attach the background music as an AudioSource component on the GameController game object and set the component to loop.
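For reference, the same settings we toggle in the Inspector can also be set from code. Here’s a minimal sketch, assuming an AudioSource with a music clip is already attached (the class name BackgroundMusic is hypothetical):

using UnityEngine;

public class BackgroundMusic : MonoBehaviour
{
    void Start ()
    {
        AudioSource source = GetComponent<AudioSource> ();
        source.loop = true; // repeat the clip when it finishes
        source.Play ();     // start the music (what Play On Awake would do automatically)
    }
}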

Counting Points and Displaying the Score

GUIText vs. UI

In the next section, we create a UI to show the score in the game.

The video had us create an empty game object and then attach a GUIText component to it. As you might recall, back in the Roll-A-Ball tutorial, we used the UI Canvas system instead.

From my own research, GUIText is the old way Unity used to show text; the Canvas system is the new way to implement UIs.

One of the many benefits of the Canvas system is that we can anchor UI elements to specific corners, as we saw in the Roll-a-Ball tutorial. If we use GUIText, we have to position everything manually.

Creating Our Score and Calling Other Scripts…in Scripts!

Now that we have our GUI available, the next thing to figure out is how to get the component and edit it. Luckily, if you’ve been following along, you should have an idea of how to do it!

We use GetComponent and grab the component that we attached to our GameObject!

The next question is: which GameObject should we attach the script to? Well, technically speaking, the easiest place might be the asteroid’s DestroyByContact script, because that’s where we know we scored.

However, this brings up multiple complications:

  • We generate multiple asteroids, all of which would run the same code. If we were to keep track of a total score there, each asteroid object would start at 0, and when it gets destroyed, it would change our text to 1. Every. Single. Time.
  • My programmer instincts say the DestroyByContact script shouldn’t be the one in charge of keeping score; we need a manager of some sort that keeps track of the overall state of the game, or maybe… a controller!

And as we’ll soon see in the video, we’re right. All of the logic is added into the GameController script as you can see here:

using UnityEngine;
using System.Collections;

public class GameController : MonoBehaviour
{
    public GameObject hazard;
    public Vector3 spawnValues;
    public int hazardCount;
    public float spawnWait;
    public float startWait;
    public float waveWait;
    public GUIText scoreText;

    private int score;

    void Start ()
    {
        score = 0;
        UpdateScore ();
        StartCoroutine (SpawnWaves ());
    }

    IEnumerator SpawnWaves ()
    {
        yield return new WaitForSeconds (startWait);
        while (true)
        {
            for (int i = 0; i < hazardCount; i++)
            {
                Vector3 spawnPosition = new Vector3 (Random.Range (-spawnValues.x, spawnValues.x), spawnValues.y, spawnValues.z);
                Quaternion spawnRotation = Quaternion.identity;
                Instantiate (hazard, spawnPosition, spawnRotation);
                yield return new WaitForSeconds (spawnWait);
            }
            yield return new WaitForSeconds (waveWait);
        }
    }

    public void AddScore (int newScoreValue)
    {
        score += newScoreValue;
        UpdateScore ();
    }

    void UpdateScore ()
    {
        scoreText.text = "Score: " + score;
    }
}

What do we have here? Our GameController script keeps track of our score, and there’s only one instance of it, so we don’t have to worry about the multiple-instance problem discussed above. We attach our GUIText to the script, and we call UpdateScore() to initialize our text’s starting state.

But wait! How do we update our score whenever we destroy an asteroid? We’ll soon see.

Note that we have a public void AddScore(). What does it mean for a function to be public? It means that if another script has access to our script component, it can call the function.

Looking at the DestroyByContact code, that’s exactly what’s being done!

using UnityEngine;
using System.Collections;

public class DestroyByContact : MonoBehaviour
{
    public GameObject explosion;
    public GameObject playerExplosion;
    public int scoreValue;

    private GameController gameController;

    void Start ()
    {
        GameObject gameControllerObject = GameObject.FindWithTag ("GameController");
        if (gameControllerObject != null)
        {
            gameController = gameControllerObject.GetComponent<GameController> ();
        }
        if (gameController == null)
        {
            Debug.Log ("Cannot find 'GameController' script");
        }
    }

    void OnTriggerEnter (Collider other)
    {
        if (other.tag == "Boundary")
        {
            return;
        }
        Instantiate (explosion, transform.position, transform.rotation);
        if (other.tag == "Player")
        {
            Instantiate (playerExplosion, other.transform.position, other.transform.rotation);
        }
        gameController.AddScore (scoreValue);
        Destroy (other.gameObject);
        Destroy (gameObject);
    }
}

Looking back at our DestroyByContact code, we used Start() to grab the first instance of our GameController object that exists.

We can do this by first setting the Tag: “GameController” on our GameController object.

The GameObject class that we use here, just like the Math library, contains static functions, meaning we can call them anytime we want without needing an instance.

In this case: FindWithTag() is a static function available to use that helps us search for the GameObject with the tag “GameController.”

FindWithTag() returns the GameObject if it finds it, otherwise it returns a null object. That’s why we have to first check if the object we get back is null or not, because if we try to do anything with a null object, our game will crash.

Once we’re sure that our GameObject isn’t null, we do the next thing: grabbing the Script Component attached to it.

Once we’ve initialized our gameController variable, we can directly call its public function AddScore(), updating our total score for destroying the asteroid. Fantastic!

Now whenever an asteroid blows up, we update our points!

Ending the Game

We made it to the end! We have:

  • Our player ship
  • Enemy asteroids being spawned
  • Destruction effects
  • Sound effects
  • UI

There’s only one thing left before this tutorial is finished, and that’s creating the game’s finish state.

To do this, we created:

  • Two more GUIText labels: the game over message and the restart instructions
  • Booleans to tell us whether the game is over and whether we can restart

First, looking at the GameController script:

using UnityEngine;
using UnityEngine.SceneManagement; // needed for SceneManager below
using System.Collections;

public class GameController : MonoBehaviour
{
    public GameObject hazard;
    public Vector3 spawnValues;
    public int hazardCount;
    public float spawnWait;
    public float startWait;
    public float waveWait;
    public GUIText scoreText;
    public GUIText restartText;
    public GUIText gameOverText;

    private bool gameOver;
    private bool restart;
    private int score;

    void Start ()
    {
        gameOver = false;
        restart = false;
        restartText.text = "";
        gameOverText.text = "";
        score = 0;
        UpdateScore ();
        StartCoroutine (SpawnWaves ());
    }

    void Update ()
    {
        if (restart)
        {
            if (Input.GetKeyDown (KeyCode.R))
            {
                SceneManager.LoadScene (SceneManager.GetActiveScene ().buildIndex);
            }
        }
    }

    IEnumerator SpawnWaves ()
    {
        yield return new WaitForSeconds (startWait);
        while (true)
        {
            for (int i = 0; i < hazardCount; i++)
            {
                Vector3 spawnPosition = new Vector3 (Random.Range (-spawnValues.x, spawnValues.x), spawnValues.y, spawnValues.z);
                Quaternion spawnRotation = Quaternion.identity;
                Instantiate (hazard, spawnPosition, spawnRotation);
                yield return new WaitForSeconds (spawnWait);
            }
            yield return new WaitForSeconds (waveWait);

            if (gameOver)
            {
                restartText.text = "Press 'R' for Restart";
                restart = true;
                break;
            }
        }
    }

    public void AddScore (int newScoreValue)
    {
        score += newScoreValue;
        UpdateScore ();
    }

    void UpdateScore ()
    {
        scoreText.text = "Score: " + score;
    }

    public void GameOver ()
    {
        gameOverText.text = "Game Over!";
        gameOver = true;
    }
}

Creating Our New Variables

The first thing you can see is that we created our GUIText objects and Booleans to allow us to check if the game is over or not. We initialize these new variables in Start().

Creating the Restart Options

To restart the game, we have to capture a button press. To do this, we put all of our user-input code inside the Update() function, since it runs every frame, allowing us to make these checks continuously.

In our Update() function, we check whether we’re in the restart state; if we are and the user presses R, we reload the scene.

I’m sure we’ll see more about the SceneManager in the future, but as you recall, we work in scenes for our games in Unity. What this means is that in the future we might have games with multiple scenes that we can switch between.

In the tutorial, we use Application, but that’s the deprecated version. We now use the SceneManager (which requires the UnityEngine.SceneManagement namespace, as shown in the code above).

Creating the Game Over State

Just like when we created AddScore(), our GameController doesn’t know on its own when the game is over; an external source has to tell it. That’s why we made GameOver() public.

Inside the function, we set the text to say "Game Over!" and set our gameOver flag to true. But that doesn’t immediately end our game yet!

If you notice in the spawn enemy code, we don’t ever stop creating new enemies, even when it’s game over! We fix that with this:

if (gameOver)
{
    restartText.text = "Press 'R' for Restart";
    restart = true;
    break;
}

What this does is display our restart instruction and enter the restart state, which means we can start detecting when the user presses R in Update().

We also break out of the while loop so we won’t continue spawning asteroids forever.

The Next Part…

So great, we added GameOver to our GameController script, but where do we call it?

Inside the DestroyByContact script! Specifically, when our ship blows up.

using UnityEngine;
using System.Collections;

public class DestroyByContact : MonoBehaviour
{
    public GameObject explosion;
    public GameObject playerExplosion;
    public int scoreValue;

    private GameController gameController;

    void Start ()
    {
        GameObject gameControllerObject = GameObject.FindWithTag ("GameController");
        if (gameControllerObject != null)
        {
            gameController = gameControllerObject.GetComponent<GameController> ();
        }
        if (gameController == null)
        {
            Debug.Log ("Cannot find 'GameController' script");
        }
    }

    void OnTriggerEnter (Collider other)
    {
        if (other.tag == "Boundary")
        {
            return;
        }
        Instantiate (explosion, transform.position, transform.rotation);
        if (other.tag == "Player")
        {
            Instantiate (playerExplosion, other.transform.position, other.transform.rotation);
            gameController.GameOver ();
        }
        gameController.AddScore (scoreValue);
        Destroy (other.gameObject);
        Destroy (gameObject);
    }
}

We already have the gameController script component so all we need to do is call GameOver!

And there we go! Now we can have a game over state and restart to the beginning!

Conclusion

Phew, this was a long post for the day! I’m seriously reconsidering writing everything down. It’s starting to take longer than the actual learning, re-watching, and implementing!

On the bright side, however, I have definitely learned more than I normally would since I have to understand what I’m blogging about!

Also, I think things will be a lot easier once I start working on my own projects and deviate from these long “what I learned from these tutorials” posts.

Anyways, we started 3 days ago with close to no knowledge, and we’re now one step closer to making a VR game:

  • Setting up an environment
  • Creating the player
  • Spawning enemies
  • Destroying/Creating objects
  • Creating UI
  • Detecting user button presses
  • Accessing other objects from your script
  • And, I’m sure, much more!

I’m going to skip the last few modules on enhancing the game and go straight to the next tutorial.

I think this will be the last tutorial before I start messing around with creating a simple game. Until then!

Original Link

Day 3: Going Through the Unity Space Shooter Tutorial II

Finally, back to the original coding on Day 3. I left off creating the background, the player object, and the ability to shoot! Some of the core topics that were talked about in today’s tutorial include:

  • Creating a boundary box to delete objects
  • Creating enemies/obstacles

You can catch up on Day 2 here.

Let’s get started!

Boundaries, Hazards, and Enemies

Boundary

Leaving off from last time, we created bullets that would fire off from the ship, but if you were to look at the game hierarchy pane, you would see a lot of the bullet objects would just remain there.

The more you shoot, the more you’ll have. So what gives?

If you were to pause the game and go to the Scene tab, you’ll see that the bullets actually keep going, never disappearing.

Is this a problem? You bet it is! The more GameObjects we instantiate, the more Unity has to calculate, which means our performance will suffer!

The solution was to create a giant cube that covers the scene. I attached a script to this cube and added:

using UnityEngine;
using System.Collections;

public class DestroyByBoundary : MonoBehaviour
{
    void OnTriggerExit (Collider other)
    {
        Destroy (other.gameObject);
    }
}

What we’re relying on here is the OnTriggerExit() function. As the name suggests, the function gets called when a collider leaves the object it’s overlapping. When the code triggers, we Destroy() the object, which in this case is the laser. Once we attach this script, you’ll see that the lasers disappear.

Creating Hazards

In the next video, we learn how to create asteroids that will fly down at the player. We:

  • Used the provided asteroid model to create the GameObject
  • Attached a capsule collider component to it and marked it as a trigger
  • Adjusted the collider to match the asteroid shape as closely as possible
  • Added a Rigidbody component
  • Added the provided RandomRotator script to the asteroid
using UnityEngine;
using System.Collections;

public class RandomRotator : MonoBehaviour
{
    public float tumble;

    void Start ()
    {
        // Note: the component type is Rigidbody (lowercase "b"), not RigidBody.
        GetComponent<Rigidbody> ().angularVelocity = Random.insideUnitSphere * tumble;
    }
}

AngularVelocity

angularVelocity is the speed at which the object rotates. In the video, we use it to give the asteroid a random rotation via Unity’s Random class. Specifically, we chose insideUnitSphere, which returns a random vector inside a sphere of radius 1, and multiply it by the speed we want the asteroid to tumble at.

Destroy Asteroids When Shot

Now that we have our first “enemy”, we want to be able to shoot it and get rid of it! Right now, when our laser touches the asteroid, nothing happens; that’s because both the laser and the asteroid are triggers, so they don’t physically collide with each other. What we have to do is add a script to our object with logic for what happens when it overlaps another object. We attach this script to our asteroid:

using UnityEngine;
using System.Collections;

public class DestroyByContact : MonoBehaviour
{
    void OnTriggerEnter (Collider other)
    {
        if (other.tag == "Boundary")
        {
            return;
        }
        Destroy (other.gameObject);
        Destroy (gameObject);
    }
}

We’re already familiar with this code. When our asteroid runs into something, it’ll destroy both the other object and itself.

An interesting thing here is that we check whether the object we ran into is the boundary box we created, and if it is, we stop. It’s important to check for the boundary, because if we don’t, the first thing that happens when the game loads is that the asteroid collides with the boundary and both get destroyed. To solve this, the video created a tag called Boundary and attached it to the Boundary GameObject. With this, whenever the asteroid touches the Boundary GameObject, we end the function call and nothing happens.

Explosions

In the next video, we added some more special effects, specifically what happens when the asteroid gets hit.

Opening up the DestroyByContact script that was created previously, the video made some changes:

using UnityEngine;
using System.Collections;

public class DestroyByContact : MonoBehaviour
{
    public GameObject explosion;
    public GameObject playerExplosion;

    void OnTriggerEnter (Collider other)
    {
        if (other.tag == "Boundary")
        {
            return;
        }
        Instantiate (explosion, transform.position, transform.rotation);
        if (other.tag == "Player")
        {
            Instantiate (playerExplosion, other.transform.position, other.transform.rotation);
        }
        Destroy (other.gameObject);
        Destroy (gameObject);
    }
}

In the code, two GameObjects were made public variables. These are the explosion effects the tutorial provided: one is the asteroid explosion, and the other is the player explosion. Similar to how we create a new bullet GameObject, we Instantiate() an explosion GameObject for the asteroid, and if the asteroid collides with the player object (identified by its tag), we also make the player blow up. Once the code above was added to the script, I went back to the editor and attached my explosion effects to the script component.

Re-Using Code

It’s also worth noting that in this video, we re-attached our Mover script to our asteroid and set the speed to -5. As a result, instead of moving up like our bullet, the asteroid goes in the opposite direction: down. What’s important here is that scripts are re-usable components themselves. We don’t have to create a script for every GameObject; if an existing script already does something that’s needed, we can re-use the same script with different values!

Game Controller

In the next video, we worked on creating a game controller. The game controller is responsible for controlling the state of the game, which in this case means generating asteroids.

We create a new empty GameObject, call it GameController, and attach a new GameController script to it.

using UnityEngine;
using System.Collections;

public class GameController : MonoBehaviour
{
    public GameObject hazard;
    public Vector3 spawnValues;

    void Start ()
    {
        SpawnWaves ();
    }

    void SpawnWaves ()
    {
        Vector3 spawnPosition = new Vector3 (Random.Range (-spawnValues.x, spawnValues.x), spawnValues.y, spawnValues.z);
        Quaternion spawnRotation = Quaternion.identity;
        Instantiate (hazard, spawnPosition, spawnRotation);
    }
}

Let’s go through this code for a bit. We created some public variables:

public GameObject hazard;
public Vector3 spawnValues;

hazard is the asteroid, and spawnValues is the range of locations where we instantiate our asteroids.

We create a new function SpawnWaves() and call it from the Start() function. We’ll see why the video does this later, but reading the code in the function:

void SpawnWaves ()
{
    Vector3 spawnPosition = new Vector3 (Random.Range (-spawnValues.x, spawnValues.x), spawnValues.y, spawnValues.z);
    Quaternion spawnRotation = Quaternion.identity;
    Instantiate (hazard, spawnPosition, spawnRotation);
}

We create a Vector3 that represents the point where we want to create an asteroid. We use Random.Range() to generate a random value between the two values we give it. We don’t want to change the Y or Z values of our GameObject, so we only randomize the starting X location (left and right).

Quaternion.identity just means no rotation. What this means for our code is that we’re creating an asteroid at a random position and without rotation. The reason we don’t set a rotation is that it would interfere with the random tumble we already added in our RandomRotator script.

Spawning Waves

Currently, the code only generates one asteroid. It would be a pretty boring game if the player only had to avoid one asteroid to win. So next up, in this video, we create waves of asteroids for the player to dodge. To do this, we could copy and paste more prefabs into Start() in the GameController script; however, not only does this make me cry a bit on the inside, it also makes it harder to make changes in the future. Here’s what we ended up making:

using UnityEngine;
using System.Collections;

public class GameController : MonoBehaviour
{
    public GameObject hazard;
    public Vector3 spawnValues;
    public int hazardCount;
    public float spawnWait;
    public float startWait;
    public float waveWait;

    void Start ()
    {
        StartCoroutine (SpawnWaves ());
    }

    IEnumerator SpawnWaves ()
    {
        yield return new WaitForSeconds (startWait);
        while (true)
        {
            for (int i = 0; i < hazardCount; i++)
            {
                Vector3 spawnPosition = new Vector3 (Random.Range (-spawnValues.x, spawnValues.x), spawnValues.y, spawnValues.z);
                Quaternion spawnRotation = Quaternion.identity;
                Instantiate (hazard, spawnPosition, spawnRotation);
                yield return new WaitForSeconds (spawnWait);
            }
            yield return new WaitForSeconds (waveWait);
        }
    }
}

Coroutine

Coroutines are functions that run your code, yield control back to the rest of Unity, and then resume where they left off once their wait condition has been met.

We can see this in the code above, where we have:

yield return new WaitForSeconds (spawnWait);

This means we wait spawnWait seconds before spawning the next enemy. However, if we were to do something like:

yield return null;

The code will resume on the next frame. Does that sound kind of familiar? That’s because it acts very similarly to how Update() works! From my understanding, you could almost use coroutines to replace Update() if you wanted to, but their main benefit is avoiding cramming code inside Update(). If there’s some logic we only want to run once in a while, we can use coroutines to avoid unnecessary code running every frame.

Another thing to note, coroutines run outside the normal flow of your code. If you were to put something like this:

void Start ()
{
    StartCoroutine (test ());
    print ("end start");
}

IEnumerator test ()
{
    for (int i = 0; i < 3; i++)
    {
        print ("in for loop " + i);
        yield return new WaitForSeconds (1);
    }
}

Our console will print this:

in for loop 0
end start
in for loop 1
in for loop 2

And if we were to do something like this:

void Update ()
{
    StartCoroutine (test ());
}

IEnumerator test ()
{
    for (int i = 0; i < 3; i++)
    {
        print ("in for loop " + i);
        yield return new WaitForSeconds (1);
    }
}

We would have something like this:

in for loop 0
in for loop 0
in for loop 0
in for loop 0
in for loop 0
in for loop 0
in for loop 0
in for loop 0
in for loop 0

… and so on, once per frame, for about 1 second; from there, we’ll have a mix of:

in for loop 0 and in for loop 1

This happens because we start the coroutine multiple times, specifically once per frame. After a second, the earliest coroutines start printing i = 1 while Update() is still starting new coroutines that print i = 0. Besides the coroutine, the rest of the code is pretty straightforward; we add a couple of public variables for the waiting times.

Cleaning Up Explosions

Moving on from creating our waves: whenever our ship destroys an asteroid, we create an explosion. However, that explosion never disappears, because explosions never leave our boundary. What we do is attach the DestroyByTime script to the explosion, which destroys the explosion GameObject after a set amount of time. The code is pretty straightforward.

using UnityEngine;
using System.Collections;

public class DestroyByTime : MonoBehaviour
{
    public float lifetime;

    void Start ()
    {
        Destroy (gameObject, lifetime);
    }
}

Conclusion

Phew and that’s it for Day 3; today we learned:

  • How to use boundaries to clean up GameObjects that leave the play area
  • How to create, move, and destroy enemy waves
  • How to use coroutines, which in some ways are similar to Update()

I’m going to call it a day! In the next part of the series, we’ll be looking into creating UIs and audio to finish the space shooter game.

Original Link

Day 1 of 100 Days of VR: Going Through the Unity Ball Tutorial

The first step to learning VR is learning how to use one of the game engines that support them. In the current market, we have two options: Unity and Unreal Engine.

I was told that Unity was more beginner-friendly, so I decided to pick that up. I installed the latest version of Unity along with the Visual Studio 2017 Community Edition that comes bundled with Unity.

Great! Now that we have our toolkit installed, what’s next?

The first thing I did was start going through Unity’s Roll-a-Ball tutorial.

Here’s a summary of what I learned in the tutorial:

Setting Up the Game

Scene

The scene tab gives you a 3D view of the world that you can move around in. In this tab, you can directly drag and drop the positioning of the objects that you inserted into Unity.

Hierarchy

The hierarchy displays the Unity objects that you create and add to your scene.

Positioning

Since we’re working in a 3D environment, Unity has a 3D coordinate system: (x, y, z). If we select the sphere that was created in the tutorial and look at the Inspector on the right, we can see and change some of its Transform properties, such as position, rotation, and scale.

It appears that if we imagine something lying flat on a surface:

  • X = Horizontal movement
  • Y = Height movement
  • Z = Vertical movement

Material

If we want to change the color of our object, we have to create a material for the Mesh Renderer component that you can see in the picture above.

If we select the sphere, in the Inspector under the Mesh Renderer component, we can choose a material to apply to our object.

In the tutorial, we created a new material in the project pane and made its color red. Then we applied it to our sphere to get the red sphere that you see above.

Roll a Ball

After the first video showed us how to set up the environment and use some of the tools available for the Roll-a-Ball tutorial, the next thing we learned was how to actually move the ball.

Rigid Body

The first thing we need to do is add a Rigidbody component to our ball object. This makes the ball take part in Unity’s physics engine, so it will be affected by things such as gravity and collisions with other objects.

Looking at the code for this part of the video, we can learn a lot of information:

using UnityEngine;
using System.Collections;

public class PlayerController : MonoBehaviour
{
    public float speed;

    private Rigidbody rb;

    void Start ()
    {
        rb = GetComponent<Rigidbody> ();
    }

    void FixedUpdate ()
    {
        float moveHorizontal = Input.GetAxis ("Horizontal");
        float moveVertical = Input.GetAxis ("Vertical");

        Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);

        rb.AddForce (movement * speed);
    }
}

The way scripts work in Unity is that they’re attached to GameObjects in the scene, and from a script we can grab any component that’s attached to the same GameObject.

A great example is in the code above, where we use:

rb = GetComponent<Rigidbody>();

to access our Rigidbody component.

Public Variables

Before looking at the rest of the code, I want to point out that if you make a variable public in a Unity script, you can set its value outside of the code, under that script’s component in the Unity Editor.

Using the component system, we can easily change values and settings on the fly while playtesting the game!

Learning About Start and Update

So we have two functions being used in the code: Start() and FixedUpdate(). Both of these are functions we inherit from MonoBehaviour, which controls how the game runs our script.

Here are the more common methods I found:

  • Start() – this runs only once, when your GameObject gets created in Unity. It’s used for initializing your variables and setting up the state of your GameObject when it’s first created.

Next are the update methods. Depending on what you’ve worked on before, the update function might take some time to wrap your mind around.

An update method is called once every time Unity renders a frame. You might have heard the term frames per second? If the game runs at 60 frames per second, our update function gets called 60 times every second.

So what’s a frame? If you have some knowledge of animation, you’ll know that an animation is made up of multiple images being replaced one after another. A gif illustrates this point well: it’s actually made up of multiple images from a sprite map that are looped through, one per frame.

From my basic understanding, this is extremely important for VR; if we want to avoid causing motion sickness, we have to make a game that achieves 90 fps.

But anyway, that’s a problem for the future; I don’t even know how to work with Unity yet! Going back to the update methods, we have:

  • Update() – this method is called every frame that the game runs. I did my own investigation by inserting the code snippet Debug.Log("update: " + Time.deltaTime); to print how much time has passed since the last update call, and found that the time per frame isn’t consistent.
  • FixedUpdate() – similar to Update(), but this code runs at a fixed timestep, which is good for the physics calculations we see in the code above. Printing the delta time, it appears that FixedUpdate() is called every 0.02 seconds, which works out to 50 calls per second. (A sketch of this experiment is below.)
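Here’s a minimal sketch of that timing experiment (the class name TimingLogger is my own; attach it to any GameObject and watch the console):

using UnityEngine;

public class TimingLogger : MonoBehaviour
{
    void Update ()
    {
        // Varies from frame to frame, depending on rendering load.
        Debug.Log ("update: " + Time.deltaTime);
    }

    void FixedUpdate ()
    {
        // Prints a constant 0.02 by default (Unity's fixed timestep).
        Debug.Log ("fixed update: " + Time.fixedDeltaTime);
    }
}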

Inputs and Vectors

Finally, the last part of the code is the movement code. Unity comes with a handy set of APIs that make it easy to detect when the user presses keys on the keyboard.

We used Input.GetAxis to get the direction being pressed on the arrow keys and store those values in a vector.

If you remember a bit of physics from high school, you might remember that a vector is just a direction (with a magnitude). So in our code, when we create a vector from x, y, and z values, we’re describing a force going in the direction we specified…

…which is exactly what you see here:

rb.AddForce (movement * speed);

Moving the Camera

That was a lot to learn about Unity already, and we’re only on the 3rd video of the series! Continuing on, after learning how to move the ball around, we notice that the view in the Game tab doesn’t move or follow the ball when we roll it around.

This is resolved with the camera object; the view that you see in the Game tab comes from the camera.

So a nifty way to follow the ball is to make the camera a child of our ball, by dragging our Main Camera object in the hierarchy onto the Sphere we made.

By doing this, we made our camera’s position relative to our ball’s. Unfortunately, because the ball rolls, the camera rotates along with it.

To fix this we added the provided CameraController script to our camera:

using UnityEngine;
using System.Collections;

public class CameraController : MonoBehaviour
{
    public GameObject player;

    private Vector3 offset;

    void Start ()
    {
        offset = transform.position - player.transform.position;
    }

    void LateUpdate ()
    {
        transform.position = player.transform.position + offset;
    }
}

What we did was record the distance from the camera to the player in the Start() function, then update our camera in LateUpdate() so that it’s always that distance away.

LateUpdate() acts much like the other update methods; the difference is that it runs after all the Update() calls for the frame have finished, which makes it a good place for camera-follow logic.

Setting Up the Play Area

There wasn’t too much in this section, just learning how to set up our environment by moving objects around.

Creating Collectable Objects

There also wasn’t too much here. We created collectible items that the player can roll the ball over, plus a script that makes the collectibles rotate, which we do by grabbing the GameObject’s Transform component and modifying its rotation:

transform.Rotate (new Vector3 (15, 30, 45) * Time.deltaTime);

Note: Time.deltaTime is the time that elapsed since the last time we called Update()
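For context, that rotation line lives inside a small script attached to each collectible. Here’s a minimal sketch (the class name Rotator matches the tutorial’s, but treat the rest as my reconstruction):

using UnityEngine;

public class Rotator : MonoBehaviour
{
    void Update ()
    {
        // Rotate 15/30/45 degrees per second around the x/y/z axes.
        transform.Rotate (new Vector3 (15, 30, 45) * Time.deltaTime);
    }
}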

Prefabs

However, one interesting thing to note is that after we make the object we want, we can drag it from the hierarchy into the project pane on the bottom to create a prefab.

A prefab is a template or a clone of the object that you made. With this, we can create multiple instances of that same object.

The benefit of using a prefab is that if you ever decide to change the GameObject’s components, instead of changing each copy one by one, you can change the prefab and update every instance in one go.
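Prefabs also pair naturally with Instantiate() for spawning copies at runtime. Here’s a minimal sketch (the class name PickupSpawner and the positions are hypothetical, not from the tutorial):

using UnityEngine;

public class PickupSpawner : MonoBehaviour
{
    public GameObject pickupPrefab; // drag the prefab here in the Inspector

    void Start ()
    {
        for (int i = 0; i < 12; i++)
        {
            // Place each instance along the x axis, with no rotation.
            Instantiate (pickupPrefab, new Vector3 (i * 2f, 0.5f, 0f), Quaternion.identity);
        }
    }
}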

Collecting the Pick-Up Objects

Now things are finally getting more interesting. In this section, we learned a variety of subjects involving colliders and how objects in the game interact with other objects.

From the previous video, we created these cube objects (which have their own rigid bodies) that we can collect; however, if we roll our sphere into a cube, we bounce back.

This problem will be addressed in this video.

Collider Intro

Whenever a GameObject that has a Collider component touches another GameObject with a Collider component, we call that a collision.

Unity provides many collider components that you can attach to your GameObject. These colliders all come in different shapes and sizes. The goal is to pick a collider that matches the shape of your GameObject.

If we were to use a collider of a different shape (let’s say a box, for example), the collision wouldn’t actually occur when something touches the visible surface (the red sphere); instead, the collision would start when it touches the edge of the box.

From my understanding, we can use a mesh collider to hug our object as closely as possible; however, the more fine-grained control we need over collisions, the more calculations have to be done, resulting in worse performance.

The TL;DR: only use complex colliders if you absolutely have to; otherwise, try to use more basic shape colliders.

Using Colliders

Now back to what was taught, how do we use them?

In our PlayerController.cs, we just implement the OnTriggerEnter() function!

void OnTriggerEnter (Collider other)
{
    if (other.gameObject.CompareTag ("Pick Up"))
    {
        other.gameObject.SetActive (false);
    }
}

This method gets called every time our sphere bumps into another collider; we get passed a Collider object that has access to the GameObject we collided with.

Notice how we have something called a Tag? That’s how we can identify what we hit in Unity. For each GameObject in the hierarchy, you can create and attach a Tag to it. As you can tell from the code above, we can use Tags to identify which GameObject we collided with.

After we found the object with the tag we want, we can do whatever we want with it!

Performance Optimizations

While probably not important to know this early on, something that was briefly mentioned is that there are two types of colliders, static and dynamic colliders.

Static colliders, as you’d expect, are objects that don’t move, and dynamic colliders are objects that do move. The important thing to remember is that static colliders are cached by Unity for performance; however, if you move a static GameObject, Unity has to constantly re-cache it, which becomes a performance problem!

To make a GameObject a dynamic collider, make it subject to Unity’s physics engine, i.e., add a Rigidbody component to it.

Dynamic Collider = RigidBody Component + Collider Component

Static Collider = Collider Component
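To make the distinction concrete, here’s a minimal sketch of setting up a dynamic collider from code (the class name DynamicColliderSetup is hypothetical; normally you’d add these components in the editor):

using UnityEngine;

public class DynamicColliderSetup : MonoBehaviour
{
    void Start ()
    {
        // A collider alone makes this a static collider...
        gameObject.AddComponent<SphereCollider> ();

        // ...and adding a Rigidbody turns it into a dynamic one.
        gameObject.AddComponent<Rigidbody> ();
    }
}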

Displaying the Score and Text

The game is “technically” done, but the user will never know it. In the last video of the series, we learn a bit about using Unity’s UI system.

UI

You can create a UI element like Text the same way you create a GameObject in Unity: by right-clicking in the hierarchy pane. One big difference, however, is that all UI elements are made children of a Canvas GameObject.

Another thing is that the Canvas maps directly onto the screen when you’re in the Game tab and isn’t really part of the Scene tab… well, actually it is, but at a completely different scale!

The canvas covers the whole Game tab screen and for our UI GameObject, instead of Transform, we have something called Rect Transform.

In a way, it’s very similar to a normal GameObject in that you can still drag the object around. However, a nifty trick that helps position the text is to click the square button in the top-left corner of the Rect Transform component, and you’ll get this:

If you hold alt and click one of the squares, your text will be automatically moved into one of the 9 spaces! Nifty!

How to Use Our UI Components

Looking at the code that was created in the video:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class PlayerController : MonoBehaviour
{
    public float speed;
    public Text countText;
    public Text winText;

    private Rigidbody rb;
    private int count;

    void Start ()
    {
        rb = GetComponent<Rigidbody> ();
        count = 0;
        SetCountText ();
        winText.text = "";
    }

    void FixedUpdate ()
    {
        float moveHorizontal = Input.GetAxis ("Horizontal");
        float moveVertical = Input.GetAxis ("Vertical");

        Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);

        rb.AddForce (movement * speed);
    }

    void OnTriggerEnter (Collider other)
    {
        if (other.gameObject.CompareTag ("Pick Up"))
        {
            other.gameObject.SetActive (false);
            count = count + 1;
            SetCountText ();
        }
    }

    void SetCountText ()
    {
        countText.text = "Count: " + count.ToString ();
        if (count >= 12)
        {
            winText.text = "You Win!";
        }
    }
}

We created public variables of type Text; you can assign their values by dragging and dropping the Text objects from the hierarchy into the variable slots on the script’s component.

One very important thing to remember when you’re using UI gameobjects is to include:

using UnityEngine.UI;

Otherwise, Unity won’t know any of the UI objects that you’re trying to reference.

The only new important part is the function SetCountText() which is where we see how we interact with a UI object:

void SetCountText ()
{
    countText.text = "Count: " + count.ToString ();
    if (count >= 12)
    {
        winText.text = "You Win!";
    }
}

It looks pretty straightforward, right? You get the UI object, and you set its text to be whatever you want it to be. Easy!

Conclusion

Phew, doing this write-up summary might actually be harder than the actual learning of Unity… and it’s only day 1!

But that’s okay. We’re just starting to learn about Unity. I’m sure as we progress we’ll keep seeing things like colliders, rigid bodies, and UI again, and I won’t have to explain them from scratch. I really hope so. I really do…

Anyways, that’s the end of day 1! I hope you learned something valuable from this!

100 Days of VR | Day 2

Original Link

The next phase of China’s offline VR boom is here: virtual reality cinemas

A virtual reality experience room at the 74th Venice Film Festival. Photo credit: Eddie Lou.

In a small cafe near a street of embassies in Beijing, some of the world’s best virtual reality films are on display. As I sit down to drink tea, a young woman walks in and asks for Allumette, a 20-minute animation that premiered at the Tribeca Film Festival in 2016.

“All the VR cinemas [in China] are fairly new,” says Cedric Garcia, head of content at Yue Cheng Technology, which opened two VR cinemas in Beijing this year, including the cafe where we’re chatting now.

That’s because most virtual reality film studios are overseas, he explains. There didn’t use to be enough content made in China, but that’s changed.

The virtual reality industry is experiencing a chicken-and-egg problem.

The global VR industry – projected to hit US$7 billion in revenue this year – is experiencing a chicken-and-egg problem, where bottlenecks in hardware and content, along with high costs, feed viciously into each other. Because there aren’t high-quality yet affordable headsets for consumers, VR content makers are struggling to sell their work. Likewise, the dearth of engaging content makes it difficult for hardware developers to convince consumers to buy their own high-end headsets.

Experts see offline venues such as virtual reality arcades, where users can try games without buying their own equipment, as a way to fill this gap in the short term. In China, there are an estimated 12,000 brick-and-mortar VR experience centers – though generating profit continues to be a struggle.

“Offline experience centers have become an important way to educate the VR market,” explains Men Yuxiao, an analyst at Chinese research firm iResearch. “They can accelerate the development of the entire VR industry.”

Yue Cheng Technology’s tiny cafe cinema. Photo credit: Tech in Asia.

In particular, VR cinemas constitute a new channel for filmmakers, which target a different subset of virtual reality enthusiasts from gamers. “The two genres have very different user needs,” emphasizes Lei Zhengmeng, CEO and co-founder of Pinta Studios, a Beijing-based VR film studio.

“Gamers want to play, whereas moviegoers care more about the storytelling experience,” he adds.

Testing the waters

Though virtual reality cinemas are still in their early stages, there’s already a lot of diversity in layout and design. Yue Cheng Technology’s cafe is a tiny one-seater, where anyone who buys a drink can try its VR set for free. Its second cinema, however, which sits inside a big-box electronics retailer, charges US$5 to US$12 for a day pass, depending on the time and day of the week. Every week, the company curates four to five pieces.

In Amsterdam, however, The VR Cinema seats around 20 people and comes with a classy bar and lounge. The Dutch venue is one of the world’s first VR movie theaters and actually inspired Yue Cheng Technology’s CEO Gu Bin to pivot from public relations consulting to the VR industry in January. Here, tickets are charged per half-hour of viewing at US$10.50.

Different still is X-Cube in Shanghai, which opened just last month. The VR cinema has seven seats and is housed inside a normal movie theater. It charges moviegoers US$3 to US$4.50 per film.

A common refrain, however, is that theaters are still trying to figure out the best pricing models, underlining the importance of outside investment. In addition to its cinema business, for instance, Yue Cheng Technology has received an undisclosed amount of funding from ad agency Digitop. The company is also working on its own content, including a social virtual reality application.

“Virtual reality cinemas are still in the beginning stages,” Lei tells Tech in Asia. “At the moment, few have actually launched. They’re also mostly concentrated in [large] cities, and they need time to figure out how to generate profit.”

Lack of good content also continues to be a problem. Yue Cheng Technology has over 100 licensed pieces of content, but not all of them are top notch. One of the 360 degree videos I tried was a grainy video feed of panda bears in Chengdu – definitely not comparable to state-of-the-art work like Dear Angelica.

At the same time, cinemas can’t afford to play the same movies every week – which might happen if they only stick to the best films. And while the concept of a VR cinema is enticing, it takes compelling content to bring people back for second and third viewings.

Finally, advancements in hardware still need to be made. Even with an HTC Vive headset on, the resolution of Invasion, a short animated film about aliens, left much to be desired.

Photo credit: X-Cube.

Still, Garcia believes the industry is changing quickly. Oculus, for instance, plans to launch an untethered headset next year that can track your position without the need for external cameras.

Virtual reality film is also progressing – even “faster than games because of all the film festivals,” he explains, naming Tribeca and Sundance. “Most of these festivals have a VR section.” Representation from China is also on the rise. Pinta Studios and Sandman VR, for instance, were two of three Chinese studios that participated in the Venice Film Festival’s lineup of 22 VR films this year.

See: This startup is shooting for a Pixar moment in China’s fragmented but huge VR space

For now, filmmakers like Lou believe that VR cinemas can solve short-term pain points like the fragmentation of movies across different platforms, such as Oculus Rift and HTC Vive. At least at a VR movie theater, all viewers have to do is sit down and strap on a headset (and spin around in 360-degree chairs). It’s also a way to make money, though Lou believes that in the long term, intellectual property (IP) and licensing will be another important monetization strategy.

“It’s like the early days of movies and TV shows – everyone first experienced them at offline venues,” he says. Virtual reality films will follow a similar trajectory – they just need more time.

Currency converted from Chinese yuan and euros. Rate: US$1 = RMB 6.62 = EUR 0.85.
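For readers double-checking the dollar figures, here is a minimal sketch of that conversion arithmetic in Python; the helper names and the sample RMB/EUR amounts are illustrative assumptions, not values quoted in the article.

```python
# Conversion arithmetic at the rates stated above (illustrative only).
RMB_PER_USD = 6.62  # US$1 = RMB 6.62
EUR_PER_USD = 0.85  # US$1 = EUR 0.85

def rmb_to_usd(rmb: float) -> float:
    """Convert Chinese yuan to US dollars at the stated rate."""
    return rmb / RMB_PER_USD

def eur_to_usd(eur: float) -> float:
    """Convert euros to US dollars at the stated rate."""
    return eur / EUR_PER_USD

# Hypothetical local prices chosen to match the article's rounded US$ figures.
print(f"RMB 33 is about US${rmb_to_usd(33):.2f}")  # ~US$4.98, i.e., the US$5 day pass
print(f"EUR 9 is about US${eur_to_usd(9):.2f}")    # ~US$10.59, i.e., the US$10.50 ticket
```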

Original Link

Nordic accelerator launches in Shanghai, joining wave of expat-preneurs in China

Photo credit: Oliver Cole / Unsplash.

In a WeWork office overlooking central Shanghai, a cohort of Scandinavian entrepreneurs pitch to a throng of investors, startup folk, and fellow northern Europe expats. It’s the debut of nHack, the first accelerator in China designed exclusively for Nordic startups interested in the country’s enormous market.

The goal is to pair great technology and design from Finland, Sweden, Norway, and Denmark with Chinese investors, business partners, customers, and the country’s hardware supply chain.

You need to be in China to succeed in China.

“From experience, you need to be in China to succeed in China,” Jon Stø, partner of nHack, tells Tech in Asia. “I think very few companies succeed in China by sitting in their own country and trying to remote control [from afar].”

The accelerator, which formally launched last Wednesday, will invest US$25,000 to US$100,000 in accepted startups in exchange for 7 to 10 percent equity. Follow-on funding after the program is also an option, depending on the startup’s performance. The program will run for three to five months, with a demo day for its startups planned for December 6 and 7.

So far, nHack is only accepting Nordic startups. The current batch includes hardware, virtual reality, agriculture tech, and gaming companies.

When I ask Stø about China’s notoriously cutthroat market – especially for hardware companies – he remains optimistic.

“Of course I think there are challenges in a foreign market, especially in a market where you don’t speak the language and there are cultural differences,” he says. “At the same time, I think, if you have a good product […] you will get business in China.”

China-bound

The past few years have seen a slew of overseas organizations, such as Chinaccelerator, StartupEast, and Startupbootcamp, enter China in hopes of helping entrepreneurs from their respective countries succeed in Asia’s largest market. At the same time, local governments and organizations in China have been opening their doors to foreign tech talent.

The Nordic region, whose combined population roughly equals that of Shanghai, has seen an increasing amount of cross-border activity with China. Slush, a volunteer-run startup conference born out of Helsinki, Finland, launched its first conference in China last year. 2016 also saw Tencent’s US$8.6 billion acquisition of Finnish gaming company Supercell, maker of Clash of Clans.

The Chinese tech giant was also rumored to have approached Swedish unicorn Spotify for acquisition earlier this year before being rebuffed, according to TechCrunch.

We want to bring Nordic companies first to Shanghai, then to China, and then to Asia.

“We want to bring Nordic companies first to Shanghai, then to China, and then to Asia,” says Stø.

nHack plans to open a series of vertical-focused accelerators in Chinese cities including Beijing and Hangzhou. Its Shenzhen accelerator, slated to launch next March, will focus on smart hardware and IoT. Stø says the accelerator is aiming to bring in 30 startups each quarter.

In addition to mentorship, the accelerator will help companies connect with the right partners, whether they’re looking to crowdfund, find investors, or optimize their production process. nHack is also mainly focusing on companies that have raised funding before. In fact, most of the startups in its first cohort already have a product, such as underwater drone maker Blueye Robotics, which has nabbed funding from local investors in China.

“It’s not so interesting from zero to one,” says Chris Rynning, partner and co-founder at nHack, who has lived in China for 20 years. “But once you have a paying customer, you go from one to 100 in value creation.” The goal of nHack, he explains, is not to focus on pre-seed or seed stage startups, but on companies that are ready to find customers, scale their business, or iterate on the next generation of their product through rapid prototyping.

Currently, nHack is partnering with Innovation Norway, a commercial arm of the Norwegian government, and Denmark’s Danske Bank. Both entities are sponsoring and funding nHack’s accelerator program and investment fund. In China, the accelerator is a partner of WeWork and cooperates with Slush China.

Original Link

Alton Glass Debuts New Virtual Reality Film at American Black Film Festival (ABFF)

Alton Glass (Image: File)

Back in 2014, Alton Glass’ groundbreaking drama CRU made history at the American Black Film Festival (ABFF) Independent Film Awards when it took home a win in each category it was nominated for, including Best Film and Best Director. Three years later, the award-winning filmmaker returned to the annual festival to break new ground yet again with his virtual reality movie, A Little Love, which premiered Saturday, June 17, 2017.

The story of A Little Love explores the themes of love, family, and adventure, and stars actors Kellita Smith and Dorien Wilson.

Although the majority of the films screened at ABFF were shot with standard cameras, Glass’ VR film uses a combination of live-action and animated footage to leverage innovative VR technology in a way that completely immerses viewers in a 360° experience, enabling the audience to feel part of the narrative itself.

“Seeing the audience watch A Little Love for the first time was really awesome,” Glass says in an interview with BLACK ENTERPRISE. “They were looking all around, laughing, and just fully transported into this experience. I think that this was something very different for them to experience at a film festival [and] at ABFF, and I think that they loved it.”

Along with the exclusive VR experience, the event featured a panel Q&A about the convergence of technology, media, and entertainment, with Glass joined by VR experts and television executives. During the talkback session, Glass opened up about being one of the few African American pioneers in the VR filmmaking landscape. He also spoke about his decision to explore VR filmmaking after directing a number of highly acclaimed movies like The Confidant (2010), starring Boris Kodjoe and David Banner, and The Mannsfield 12 (2007), which was acquired by BET.

“What inspired me to create a narrative in virtual reality like this, was being able to see someone like myself—for people of color or diversity—inside of an experience in virtual reality,” Glass says. “I’ve never seen anything where I felt like I was there—[in the film]— with people that looked like me. So, I felt compelled to make that piece.”

The celebrated director also explained why he chose to premiere his VR movie during the five-day festival. “It was important for me to debut this film at ABFF because, one, ABFF has been very supportive throughout my career,” Glass says, adding that, second, A Little Love is one of the first VR films to feature people of color.

Original Link