Day 15 of 100 Days of VR: Survival Shooter – Adding Shooting, Hit, and More Walking Sound Effects!

On Day 15, we’re going to continue adding more sound effects to our existing game, specifically the

  • Shooting sound effect
  • Sound of the knight getting hit
  • Player walking

Today is going to be a relatively short day, but let’s get started!

Adding Enemy Hit Sounds

To start off, we’re going to do something like what we did on Day 14: we’re going to create Audio Source components in code and play the sound effects from there.

For the enemy hit sounds, we need to add our code to EnemyHealth:

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    public float Health = 100;
    public AudioClip[] HitSfxClips;
    public float HitSoundDelay = 0.5f;

    private Animator _animator;
    private AudioSource _audioSource;
    private float _hitTime;

    void Start()
    {
        _animator = GetComponent<Animator>();
        _hitTime = 0f;
        SetupSound();
    }

    void Update()
    {
        _hitTime += Time.deltaTime;
    }

    public void TakeDamage(float damage)
    {
        if (Health <= 0) { return; }
        Health -= damage;
        if (_hitTime > HitSoundDelay)
        {
            PlayRandomHit();
            _hitTime = 0;
        }
        if (Health <= 0) { Death(); }
    }

    private void SetupSound()
    {
        _audioSource = gameObject.AddComponent<AudioSource>();
        _audioSource.volume = 0.2f;
    }

    private void PlayRandomHit()
    {
        int index = Random.Range(0, HitSfxClips.Length);
        _audioSource.clip = HitSfxClips[index];
        _audioSource.Play();
    }

    private void Death()
    {
        _animator.SetTrigger("Death");
    }
}

The flow of our new code is:

  1. We create our Audio Source component in SetupSound() called from Start()
  2. We don’t want to play the hit sound every single time the knight is hit, which is why I track _hitTime in Update() and use it as a delay between sounds
  3. Whenever the enemy takes damage, we check whether we’re still inside the hit-sound delay; if we’re not, we play a random sound clip from the ones we added.

The code above should seem relatively familiar as we have seen it before in Day 14.

Once we have the code set up, the only thing left to do is to add the audio clips that we want to use, which in this case are Male_Hurt_01 through Male_Hurt_04.

That’s about it. If we shoot the enemy now, it plays a hit sound.

Player Shooting Sounds

The next sound effect that we want to add is the sound of our shooting. To do that, we’re going to make similar adjustments to the PlayerShootingController.

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;
    public float ShootingDelay = 0.1f;
    public AudioClip ShotSfxClips;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;
    private float _timer;
    private AudioSource _audioSource;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
        _timer = 0;
        SetupSound();
    }

    void Update()
    {
        _timer += Time.deltaTime;
        if (Input.GetMouseButton(0) && _timer >= ShootingDelay)
        {
            Shoot();
        }
        else if (!Input.GetMouseButton(0))
        {
            _audioSource.Stop();
        }
    }

    private void Shoot()
    {
        _timer = 0;
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit = new RaycastHit();
        _audioSource.Play();
        if (Physics.Raycast(ray, out hit, Range, _shootableMask))
        {
            print("hit " + hit.collider.gameObject);
            _particle.Play();
            EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
            EnemyMovement enemyMovement = hit.collider.GetComponent<EnemyMovement>();
            if (enemyMovement != null)
            {
                enemyMovement.KnockBack();
            }
            if (health != null)
            {
                health.TakeDamage(1);
            }
        }
    }

    private void SetupSound()
    {
        _audioSource = gameObject.AddComponent<AudioSource>();
        _audioSource.volume = 0.2f;
        _audioSource.clip = ShotSfxClips;
    }
}

The flow of the code is like the previous ones; however, for our gun, I decided to use the machine gun sound instead of individual pistol shots.

  1. We still set up our Audio Source in Start().
  2. The interesting part is in Update(): we play our audio in Shoot(), and as long as we’re holding down the mouse button, we keep playing the shooting sound; when we let go, we stop the audio.

After we add our script, we attach Machine_Gunfire_01 to the script component.

Player Walking Sound

Last but not least, we’re going to add the player walking sound in our PlayerController:

using UnityEngine;

public class PlayerController : MonoBehaviour
{
    public float Speed = 3f;
    public AudioClip[] WalkingClips;
    public float WalkingDelay = 0.3f;

    private Vector3 _movement;
    private Rigidbody _playerRigidBody;
    private AudioSource _walkingAudioSource;
    private float _timer;

    private void Awake()
    {
        _playerRigidBody = GetComponent<Rigidbody>();
        _timer = 0f;
        SetupSound();
    }

    private void SetupSound()
    {
        _walkingAudioSource = gameObject.AddComponent<AudioSource>();
        _walkingAudioSource.volume = 0.8f;
    }

    private void FixedUpdate()
    {
        _timer += Time.deltaTime;
        float horizontal = Input.GetAxisRaw("Horizontal");
        float vertical = Input.GetAxisRaw("Vertical");
        if (horizontal != 0f || vertical != 0f)
        {
            Move(horizontal, vertical);
        }
    }

    private void Move(float horizontal, float vertical)
    {
        if (_timer >= WalkingDelay)
        {
            PlayRandomFootstep();
            _timer = 0f;
        }
        _movement = (vertical * transform.forward) + (horizontal * transform.right);
        _movement = _movement.normalized * Speed * Time.deltaTime;
        _playerRigidBody.MovePosition(transform.position + _movement);
    }

    private void PlayRandomFootstep()
    {
        int index = Random.Range(0, WalkingClips.Length);
        _walkingAudioSource.clip = WalkingClips[index];
        _walkingAudioSource.Play();
    }
}

Explanation

This code is like what we’ve seen before, but there were some changes made.

  1. As usual, we create the sound component (this time in Awake()) along with a walking sound delay.
  2. In FixedUpdate(), we made some changes. We don’t want to play our walking sound constantly; we only want to play it while we’re walking. To do this, I added a check that we’re moving before playing our sound in Move().

Also, notice that the audio volume is 0.8, as opposed to 0.2 for our other sounds. We want the player’s footsteps to be louder than the other sounds so we can tell the difference between the player walking and the enemy walking.

After writing the script, we mustn’t forget to add the sound clips. In this case, I just re-used our footstep clips, Footstep01 through Footstep04.

Conclusion

I’m going to call it quits for today for Day 15!

Today, we added more gameplay sound into the game so when we play, we have a more complete experience.

I’m concerned about what happens when we have more enemies and how that would affect the game, but that’ll be for a different day!

Original Link

Day 14 of 100 Days of VR: Survival Shooter – Finish Attacking the Enemy and Walking Sounds in Unity

We’re back on Day 14. I finally solved the pesky problem from Day 13 where the Knight refused to get pushed back when we shot at him.

Afterwards, I decided to get some sound effects to make the game a little livelier.

Without delay, let’s get started!

Adding Player Hit Effects – Part 2

As you might recall, we last ended up trying to push the Knight back when we shoot him by changing the Knight’s velocity; however, the Knight continued to run forward.

The Problem

After a long investigation, it turns out that the Brute running animation that I used naturally moves the character’s position forward.

The Solution

After finally searching for “unity animation prevents movement,” I found the answer on StackOverflow.

In the animator, disable Apply Root Motion; we then have to apply the movement logic ourselves (which we already do).
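
If you prefer to flip this switch from code rather than the Inspector, the Animator component exposes the same setting. Here’s a minimal sketch (the component name is just for illustration; the checkbox route described above is what the post actually uses):

using UnityEngine;

public class DisableRootMotion : MonoBehaviour
{
    void Start()
    {
        // Equivalent to unchecking Apply Root Motion on the Animator component:
        // the animation clip no longer drives the transform's position,
        // so our movement code is fully in charge.
        Animator animator = GetComponent<Animator>();
        animator.applyRootMotion = false;
    }
}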

Writing the Knockback Code

Once we have Root Motion disabled, we’re relying on our code to move our knight.

The first thing we need to do is update our PlayerShootingController script to call the knockback code:

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;
    public float ShootingDelay = 0.1f;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;
    private float _timer;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
        _timer = 0;
    }

    void Update()
    {
        _timer += Time.deltaTime;
        if (Input.GetMouseButton(0) && _timer >= ShootingDelay)
        {
            Shoot();
        }
    }

    private void Shoot()
    {
        _timer = 0;
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit = new RaycastHit();
        if (Physics.Raycast(ray, out hit, Range, _shootableMask))
        {
            print("hit " + hit.collider.gameObject);
            _particle.Play();
            EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
            EnemyMovement enemyMovement = hit.collider.GetComponent<EnemyMovement>();
            if (enemyMovement != null)
            {
                enemyMovement.KnockBack();
            }
            if (health != null)
            {
                health.TakeDamage(1);
            }
        }
    }
}

The biggest change is that we get our EnemyMovement script and then call KnockBack() which we haven’t implemented yet.

Once we have this code in, we need to implement KnockBack() inside our EnemyMovement script. Here’s what it looks like:

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    public float KnockBackForce = 1.1f;

    private NavMeshAgent _nav;
    private Transform _player;
    private EnemyHealth _enemyHealth;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
        _enemyHealth = GetComponent<EnemyHealth>();
    }

    void Update()
    {
        if (_enemyHealth.Health > 0)
        {
            _nav.SetDestination(_player.position);
        }
        else
        {
            _nav.enabled = false;
        }
    }

    public void KnockBack()
    {
        _nav.velocity = -transform.forward * KnockBackForce;
    }
}

I know this was a one-liner for KnockBack(), but there was a lot of work involved to get to this point.

Here’s how the code works:

  1. When our shooting code hits the enemy, we call KnockBack(), which sets the velocity to point in the direction behind the knight, creating the illusion of being pushed back.
  2. This is only temporary as our Nav Mesh Agent will come back and move our Knight towards the player in the next Update()
  3. Here’s how KnockBackForce affects the velocity:
    1. At 1, the knight stays in place when you shoot
    2. <1, the knight gets slowed down
    3. >1, the knight gets pushed back

Adding Sound Effects

Now that we finally solved the knockback problem, we moved on to the next thing.

At this point, playing the game seems dull. Do you know what could make things a little bit more interesting? Sound effects!

I went back to the Unity Asset Store to find sound effect assets, specifically:

  1. Player shooting sound
  2. Player walking sound
  3. Player hit sound
  4. Enemy hit sound
  5. Enemy running sound
  6. Enemy attack sound

Randomly searching on Unity, I found the Actions SFX Vocal Kit which contains everything we need. Fantastic!

Once we have finished downloading and importing the SFX into our Unity project, we’ll start using them.

Adding Enemy Hit Sound Effects

Adding the Script

The first thing we’re going to do is add our Male_Hurt audio clips to our Knight.

Normally, we would just add an Audio Source component to our Knight. However, before that, let’s step back and think: what sounds does our knight need to play?

  1. Hit sound
  2. Walking sound
  3. Attack sound

If we were to add a single Audio Source component to the Knight object and use it to play every sound, one sound would immediately be replaced by the next. We don’t want that.

What we could do is create multiple Audio Source components and manually attach them to our script; however, that’s not very scalable if we ever decide we need more types of sounds.

Instead, I found this great way to add multiple audio sources on a single GameObject.

The idea is that instead of manually creating multiple components and then attaching them to a script component, why not create the component in code?

Here’s what I did:

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    public float KnockBackForce = 1.1f;
    public AudioClip[] WalkingClips;
    public float WalkingDelay = 0.4f;

    private NavMeshAgent _nav;
    private Transform _player;
    private EnemyHealth _enemyHealth;
    private Animator _animator;
    private AudioSource _walkingAudioSource;
    private float _time;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
        _enemyHealth = GetComponent<EnemyHealth>();
        _animator = GetComponent<Animator>();
        SetupSound();
        _time = 0f;
    }

    void Update()
    {
        _time += Time.deltaTime;
        if (_enemyHealth.Health > 0)
        {
            _nav.SetDestination(_player.position);
            // Play a footstep only while the Run state is active and the
            // walking delay has elapsed since the last step.
            if (_time > WalkingDelay && _animator.GetCurrentAnimatorStateInfo(0).IsName("Run"))
            {
                PlayRandomFootstep();
                _time = 0f;
            }
        }
        else
        {
            _nav.enabled = false;
        }
    }

    private void SetupSound()
    {
        _walkingAudioSource = gameObject.AddComponent<AudioSource>();
        _walkingAudioSource.volume = 0.2f;
    }

    private void PlayRandomFootstep()
    {
        int index = Random.Range(0, WalkingClips.Length);
        _walkingAudioSource.clip = WalkingClips[index];
        _walkingAudioSource.Play();
    }

    public void KnockBack()
    {
        _nav.velocity = -transform.forward * KnockBackForce;
    }
}

There’s a lot of code added in, but I tried to separate it as much as I can into easy-to-understand pieces.

Here’s the flow:

  1. In Start(), we instantiate our new private fields, specifically our new variables:
    1. _walkingAudioSource: our AudioSource for our steps
    2. _time: to track how long the enemy steps take
  2. We call SetupSound() from Start(), which creates a new AudioSource component at runtime (it only exists once the game starts) and sets its volume to 0.2f
  3. In Update(), we add logic to play the stepping sound whenever WalkingDelay (0.4 seconds) has passed and we’re still in the running animation.
    1. Note: In GetCurrentAnimatorStateInfo(0), the 0 refers to the animator’s base layer (layer index 0), which is what people typically use. From there, we can check which state the knight is in.
  4. In PlayRandomFootstep(), we randomly choose the walking sound clips that we downloaded and play them.

Once we have all of this, we need to add the audio clips in.

Go to the EnemyMovement script attached to the Knight and then, under Walking Clips, change the size to 4. We can do this because Walking Clips is an array of clips.

Then, add in Footstep01-04 into each spot. Make sure that Walking Delay is set to 0.4 if it’s not already.

Run the game and you’ll see that the enemy makes running sounds now!

If you’re using a different animation, you might have to change the Walking Delay to match the animation, but at a high level, that’s all you need to do!

Whenever the knight attacks us, the sound will stop and whenever the knight resumes running after us (with the help of some shooting knockback), the running sound will resume!

Conclusion

Today on Day 14, we found that the knight knockback problem had to do with the root motion of the animation we used.

After disabling it, we can start adding our knockback code without any problems.

With the knockback implemented, the next thing that we added was sound effects. We found some assets in the Unity store and then we added them to our enemy, where for the first time, we created a component via code.

My concern at this point is what happens when we start spawning a lot of knights? Will that create an unpleasant experience?

Either way, come back tomorrow for Day 15, where I decided I’m going to add the enemy hit sound and the player shooting sound.

Original Link

Day 13 of 100 Days of VR: Attacking Enemies, Health System, and Death Animation in Unity

Welcome back to day 13 of the 100 days of VR! Last time, we created enemy motions that used the Nav Mesh Agent to help us move our enemy Knight.

We added a trigger collider to help start the attack animations when the enemy got close to the player.

Finally, we added a mesh collider to the body of the knight so when it touches the player during its attack animation, we’ll be able to use the damage logic.

Today, we’re going to go on and implement the shooting logic for our player and to fix the annoying bug where the player would be perpetually moving after they come in contact with any other colliders.

Fixing the Drifting Problem

My first guess was that something must be wrong with the Rigid Body component of the player.

If we recall, the Rigid Body is what puts our player under the control of Unity’s physics engine.

According to the documentation for RigidBody, the moment that anything collides with our player, the physics engine will exert velocity on us.

At this point, we have 2 options:

  • Set our velocity to be 0 after any collision.
  • Make our drag value higher.

What is drag? I didn’t really understand it the first time we encountered it either, but after doing more research, specifically reading the Rigidbody2D.drag documentation, drag is how quickly an object loses velocity to friction: the higher the drag, the faster the object slows down.

I switched the drag value in the RigidBody from 0 to 5.

Before, our velocity never decreased from friction because our drag value was 0; now that we’ve added one in, we slow down over time.
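
For reference, here’s a minimal sketch of the two options from the list above (the component name is made up for illustration; the post only uses the drag approach):

using UnityEngine;

public class DriftFix : MonoBehaviour
{
    private Rigidbody _rigidBody;

    void Start()
    {
        _rigidBody = GetComponent<Rigidbody>();

        // Option 2 (what we did): raise drag from 0 to 5 so any velocity
        // the physics engine imparts on us bleeds away over time.
        _rigidBody.drag = 5f;
    }

    // Option 1 (alternative): cancel the imparted velocity outright
    // once the collision ends.
    void OnCollisionExit(Collision collision)
    {
        _rigidBody.velocity = Vector3.zero;
    }
}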

Adding the Enemy Shooting Back Into the Game

After solving the drag problem, we’re finally going back to the main portion of the game: shooting our enemy.

There will be 2 places that we’re going to have to add our code in: EnemyHealth and EnemyMovement.

EnemyHealth

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    public float Health = 10;

    private Animator _animator;

    void Start()
    {
        _animator = GetComponent<Animator>();
    }

    public void TakeDamage(float damage)
    {
        if (Health <= 0) { return; }
        Health -= damage;
        if (Health <= 0) { Death(); }
    }

    private void Death()
    {
        _animator.SetTrigger("Death");
    }
}

Here’s the new flow of the code we added:

  1. In Start(), we instantiate our Animator that we’ll use later to play the death animation.
  2. In TakeDamage() (which is called from the PlayerShootingController) when the enemy dies, we call Death()
  3. In Death(), we set the Death trigger to make the Knight play the death animation.

Next, we need to make a quick change to EnemyMovement to stop our Knight from moving when it dies.

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    private NavMeshAgent _nav;
    private Transform _player;
    private EnemyHealth _enemyHealth;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
        _enemyHealth = GetComponent<EnemyHealth>();
    }

    void Update()
    {
        if (_enemyHealth.Health > 0)
        {
            _nav.SetDestination(_player.position);
        }
        else
        {
            _nav.enabled = false;
        }
    }
}

Here’s the code flow:

  1. In Start(), we grab the EnemyHealth script so we can access the knight’s health.
  2. In Update(), if the knight is dead, we disable the Nav Mesh Agent; otherwise, it continues walking as normal.

Now when we play the game, the knight enters the death state when defeated, like so:

Improving Shooting Mechanics

At this point, you might notice a problem….

…Okay, I know there are many problems, but there are two specific problems I’m referring to.

  1. The knight dies almost instantly whenever we shoot.
  2. When we shoot, we don’t really have anything happen to the enemy to make us feel we even shot them.

So we’re going to fix these problems.

Adding a Shooting Delay

Right now, we always shoot a raycast at the enemy knight whenever Update() detects that our mouse is held down.

So, let’s add a delay to our Player Shooting Controller script.

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;
    public float ShootingDelay = 0.1f;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;
    private float _timer;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
        _timer = 0;
    }

    void Update()
    {
        _timer += Time.deltaTime;
        if (Input.GetMouseButton(0) && _timer >= ShootingDelay)
        {
            Shoot();
        }
    }

    private void Shoot()
    {
        _timer = 0;
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit = new RaycastHit();
        if (Physics.Raycast(ray, out hit, Range, _shootableMask))
        {
            print("hit " + hit.collider.gameObject);
            _particle.Play();
            EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
            if (health != null)
            {
                health.TakeDamage(1);
            }
        }
    }
}

Here’s the logic for what we added:

  1. We created our time variables to figure out how long we must wait before we shoot again
  2. In Update(), if we waited long enough, we can fire again
    1. Side note: I decided to move all of the shooting code into Shoot()
  3. Inside Shoot(), because the player fired, we’ll reset our timer and begin waiting until we can shoot again.

Adding Player Hit Effects

Setting Up the Game Objects

When we shoot our enemy knight, nothing really happens. He’ll just ignore you and continue walking towards you.

There are a lot of things we can do to make this better:

  1. Add sound effects.
  2. Add damage blood effects.
  3. Push him back.
  4. All of the above.

1) will be added in eventually, 2) might be done, but 3) is what I’m going to implement.

Every time we shoot the knight, we want to push it back. This way if a mob of them swarm at us, we’ll have to manage which one to shoot first.

This little feature took a LONG time to resolve.

The Problem

Whenever we shoot an enemy, we want to push them back, however, the Nav Mesh Agent would override any changes we tried. Specifically, the knight will always continue moving forward.

The Solution

We write some code that changes the velocity of the Nav Mesh Agent to go backwards for a couple of units.

However, when I did that, the knight continued running forward!

Why?

That’s a good question, one that I’m still investigating and hope to have a solution for by tomorrow.

End of Day 13

For the first time ever today, I started on a problem that I couldn’t solve in a day.

I’m expecting this to become more common as we start jumping deeper and deeper.

With that being said, today we fixed the player’s drifting problem using drag and added an enemy death animation for when they run out of health.

Tomorrow, I’ll continue investigating how I can push the enemy back.

See you all on Day 14! Or whenever I can figure out this knockback problem!

Original Link

100 Days of VR Day 12: Survival Shooter – Creating AI Movements for Enemies in Unity

Here we are on Day 12 of the 100 days of VR. Yesterday, we looked at the power of rigged models and Unity’s Mecanim system (which I should have learned but ignored in the Survival Shooter tutorial…).

Today, we’re going to continue off after creating our animator controller.

We’re going to create the navigation component to our Knight Enemy to chase and attack the player. As you might recall, Unity provides us an AI pathfinder that allows our game objects to move towards a direction while avoiding obstacles.

Moving the Enemy Toward the Player

Setting Up the Model

To be able to create an AI movement for our enemy, we need to add the Nav Mesh Agent component to our Knight game object. The only setting that I’m going to change is the Speed, which I set to 2.

At this point, we can delete our old enemy game object. We don’t need it anymore.

Next up, we need to create a NavMesh for our enemy to traverse.

Click on the Navigation panel next to the Inspector.

If it’s not there, then click on Window > Navigation to open up the pane.

Under the Bake tab, just hit Bake to create the NavMesh. I’m not looking to create anything special right now for our character.

Once we finish, we should have something like this when we display the NavMesh that we created.

Make sure that the environment parent game object is set to static!

Creating the Script

At this point, the next thing we need to do is create the script that allows the enemy to chase us.

To do that, I created the EnemyMovement script and attached it to our knight.

Here’s what it looks like right now:

using UnityEngine;
using UnityEngine.AI;

public class EnemyMovement : MonoBehaviour
{
    private NavMeshAgent _nav;
    private Transform _player;

    void Start()
    {
        _nav = GetComponent<NavMeshAgent>();
        _player = GameObject.FindGameObjectWithTag("Player").transform;
    }

    void Update()
    {
        _nav.SetDestination(_player.position);
    }
}

It’s pretty straightforward right now:

  • We get our player GameObject and the Nav Mesh Agent Component.
  • We set the Nav Agent to chase our player.

An important thing we must do for this code to work is add the Player tag to our character so we can grab the GameObject; a small sanity check for this is sketched below.
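
As a quick guard, you can verify the tag lookup succeeded so a missing tag fails loudly instead of as a confusing null reference later. A minimal sketch (the component name is hypothetical):

using UnityEngine;

public class PlayerTagCheck : MonoBehaviour
{
    void Start()
    {
        // FindGameObjectWithTag returns null when no object carries the tag,
        // which would otherwise only surface later as a NullReferenceException
        // inside EnemyMovement's Update().
        GameObject player = GameObject.FindGameObjectWithTag("Player");
        if (player == null)
        {
            Debug.LogError("No GameObject tagged 'Player' found in the scene.");
        }
    }
}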

After that, we can play the game and we can see that our Knight enemy will chase us.

Using the Attack Animation

Right now, the Knight would run in a circle around us. But how do we get it to do an attack animation?

The first thing we need to do is attach a capsule collider component onto our knight game object and make these settings:

  • Is Trigger is checked
  • Center Y is 1
  • Radius is 1.5
  • Height is 1

Similar to what we did in the Survival Shooter, when our Knight gets close to us, we’ll switch to an attack animation that will damage the player.

When our new Capsule Collider comes into contact with the player, we’re going to add the logic to our animator to begin the attack animation.

First, we’re going to create a new script called EnemyAttack and attach it to our Knight.

Here’s what it looks like:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    Animator _animator;
    GameObject _player;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
        _animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", true);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", false);
        }
    }
}

The logic for this is similar to what we’ve seen in the Survival Shooter. When our collider is triggered, we set “IsNearPlayer” to true so that we start the attacking animation, and when our player leaves the trigger range, the Knight stops attacking.

Note: If you’re having a problem where the Knight stops attacking the player after the first time, check the animation clip and make sure Loop Time is checked. I’m not sure how, but I disabled it.

Detecting Attack Animation

Adding a Mesh Collider

So now, the Knight will start the attack animation. You might notice that nothing happens to the player.

We’re not going to get to that today, but we’re going to write some of the starter code that will allow us to do damage later.

Currently, we have a Capsule Collider that will allow us to detect when the enemy is within striking range. The next thing we need to do is figure out if the enemy touches the player.

To do that, we’re going to attach a Mesh Collider on our enemy.

Unlike the previous collider, which is a trigger, this one will actually be used to detect when the enemy collides with the player.

Make sure that we attach the body mesh that our Knight uses to our Mesh Collider.

I’ll note that, for some reason, the Knight’s mesh sits below the floor; however, I haven’t encountered any specific problems with this, so I decided to ignore it.

Adding an Event to Our Attack Animation

Before we move on to writing the code for when the Knight attacks the player, we have to add an event to the attack animation.

Specifically, I want to make it so that when the Knight attacks, if they collide with the player, we’ll take damage.

To do that, we’re going to do something similar to what the Survival Shooter tutorial did. We’re going to add an event inside our animation to call a function in our script.

We have 2 ways of doing this:

  1. We create an Animation event on imported clips from the model
  2. We add the Animation Event in the Animation tab from the animation clip

Since our knight model doesn’t have the animation we added in, we’re going to add our event the 2nd way.
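
(As an aside, animation events can also be attached from code with AnimationClip.AddEvent; events added this way exist only at runtime, unlike the editor route we take below. A hedged sketch, with the clip field and component name being hypothetical:)

using UnityEngine;

public class AttackEventInstaller : MonoBehaviour
{
    public AnimationClip AttackClip; // assign the duplicated attack clip in the Inspector

    void Awake()
    {
        // Call the Attack() method on this GameObject's scripts at frame 16.
        // AnimationEvent.time is in seconds, so convert via the clip's frame rate.
        AnimationEvent attackEvent = new AnimationEvent();
        attackEvent.functionName = "Attack";
        attackEvent.time = 16f / AttackClip.frameRate;
        AttackClip.AddEvent(attackEvent);
    }
}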

We want to edit our Attack1 animation clip from the Brute Warrior Mecanim pack inside the Animator tab.

While selecting our Knight Animator Controller, click on Attack1 in the Animator and then select the Animation tab to open it.

If either of these tabs isn’t already open in your project, you can open them by going to Window and selecting them.

Now at this point, we’ll encounter a problem. Our Attack1 animation is read only and we can’t edit it.

What do we do?

According to this helpful post, we should just duplicate the animation clip.

So that’s what we’re going to do. Find Attack1 and hit Ctrl + D to duplicate our clip. I’m going to rename this to Knight Attack and I’m going to move this into my animations folder that I created in the project root directory.

Back in our Animator tab for the Knight Animator Controller, I’m going to switch the Attack1 state to use the new Knight Attack animation clip instead of the previous one.

Next, we’re going to have to figure out what’s a good point to set our trigger to call our code.

To do this, I dragged out the Animation tab and docked it pretty much anywhere else in the window, like so:

Select our Knight object in the hierarchy, and you’ll notice that back in the Animation tab, the play button is now clickable.

If we click it, we’ll see that our knight will play the animation clip that we’re on.

Switch to Knight Attack and press play to see our attack animation.

From here, we need to figure out where would be a good point to run our script.

Playing the animation, I believe that triggering our event at frame 16 would be the best point to see if we should damage the player.

Next, we need to click the little + button to create a new event, then drag that event to frame 16.

In the Inspector, we can select a function to call from the scripts attached to the object. Right now, we don’t have anything except OnTrigger().

For now, let’s create an empty function called Attack() in our EnemyAttack script so we can use:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    Animator _animator;
    GameObject _player;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
        _animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", true);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", false);
        }
    }

    void Attack()
    {
    }
}

All I did was add an empty Attack() method.

Now that we have this code, we might have to re-select the animation for the new function to be shown, but when you’re done, you should be able to see Attack() and we should have something like this now:

Updating Our EnemyAttack Script

So now that we finally have everything in our character setup, it’s finally time to get started in writing code.

So back in our EnemyAttack script, here’s what we have:

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
    private Animator _animator;
    private GameObject _player;
    private bool _collidedWithPlayer;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
        _animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", true);
        }
        print("enter trigger with _player");
    }

    void OnCollisionEnter(Collision other)
    {
        if (other.gameObject == _player)
        {
            _collidedWithPlayer = true;
        }
        print("enter collided with _player");
    }

    void OnCollisionExit(Collision other)
    {
        if (other.gameObject == _player)
        {
            _collidedWithPlayer = false;
        }
        print("exit collided with _player");
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == _player)
        {
            _animator.SetBool("IsNearPlayer", false);
        }
        print("exit trigger with _player");
    }

    void Attack()
    {
        if (_collidedWithPlayer)
        {
            print("player has been hit");
        }
    }
}

Here’s what I did:

  1. Added OnCollisionEnter() and OnCollisionExit() to detect when our Mesh Collider comes into contact with our player.
  2. Once it does, we set a boolean to indicate that we’ve collided with the player.
  3. Then when the attack animation plays, at exactly frame 16, we’ll call Attack(). If we’re still in contact with the Mesh Collider, our player will be hit. Otherwise, we’ll successfully have dodged the enemy.

And that’s it!

Play the game and look at the console for the logs to see when the knight gets within attacking zone, when he bumps into the player, and when he successfully hits the player.

There are actually quite a few ways we could have implemented this, and I’m not sure which way is correct, but this is what I came up with.

Other things that we could have done, but didn’t:

  1. Made it so that if we ever come in contact with the enemy, whether attacking or not, we would take damage.
  2. Created an animation event at the beginning of Knight Attack that sets some sort of _isAttacking boolean to true; then, in our Update(), if the enemy is attacking and we’re in contact with them, the player takes damage and we set _isAttacking to false, so we don’t get hit again in the same animation loop (see the sketch after this list).
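
Here’s a rough sketch of what that second alternative might look like. This is not code from the project: _isAttacking, BeginAttack(), and the event wiring are hypothetical, with BeginAttack() assumed to be called by an animation event placed at the start of the Knight Attack clip:

using UnityEngine;

public class EnemyAttackAlternative : MonoBehaviour
{
    private GameObject _player;
    private bool _collidedWithPlayer;
    private bool _isAttacking;

    void Awake()
    {
        _player = GameObject.FindGameObjectWithTag("Player");
    }

    // Hypothetical: invoked by an animation event at the START of Knight Attack.
    void BeginAttack()
    {
        _isAttacking = true;
    }

    void OnCollisionEnter(Collision other)
    {
        if (other.gameObject == _player) { _collidedWithPlayer = true; }
    }

    void OnCollisionExit(Collision other)
    {
        if (other.gameObject == _player) { _collidedWithPlayer = false; }
    }

    void Update()
    {
        // Damage at most once per attack animation loop.
        if (_isAttacking && _collidedWithPlayer)
        {
            print("player has been hit");
            _isAttacking = false;
        }
    }
}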

Conclusion

And that’s that for Day 12! That actually took a lot longer than I thought!

Initially, I thought it would simply be applying the Nav Mesh Agent like we did in the Survival Shooter game; however, when I started thinking about attack animations, things became more complicated, and I spent a lot of time trying to figure out how to damage the player ONLY during the attack animation.

Tomorrow, I’m going to update the PlayerShootingController to be able to shoot our Knight enemy.

There’s a problem in our script. Currently, whenever we run into an enemy, for some strange reason, we’ll start sliding in a direction forever. I don’t know what’s causing that, but we’ll fix that in another day!

Original Link

Day 10: Survival Shooter – Creating an Enemy

Welcome to a very special day of my 100 days of VR. Day 10! That’s right. We’re finally in the double digits!

It’s been an enjoyable experience so far working with Unity, especially now that I know a bit more about putting together a 3D game now.

We haven’t made it into the actual VR aspects of the game, but we were able to get some foundational skills for Unity, which I’m sure will help translate into the skills needed to create a real VR experience.

We’re starting to get the hang of what we can use in Unity to make a game. Yesterday, we created the beginning of the shooting mechanism.

Currently, whenever we hit something, we just print out what we hit. Today, we’re going to go in and create an enemy player that we can shoot and make some fixes.

Updating the Shooting Code

The first thing I would like to fix is that when we shoot, we shoot at whatever our cursor is pointing at, which is kind of weird.

Locking the Cursor to the Middle

This can be easily fixed by adding:

Cursor.lockState = CursorLockMode.Locked;

To Start() in our PlayerShootingController script:

We’ll have something like this:

void Start()
{
    _camera = Camera.main;
    _particle = GetComponentInChildren<ParticleSystem>();
    Cursor.lockState = CursorLockMode.Locked;
}

Now when we play the game, our cursor will be gone. It’ll still be in the middle of the screen; we just can’t see it.

Adding a Crosshair

At this point, we want some indicator to show where our “center” is.

To do this, we’re going to create an UI crosshair that we’ll put right in the middle.

In the hierarchy, add an Image, which we’ll call Crosshair. By doing this, Unity will also create a Canvas for us. We’ll call that HUD.

By default, our crosshair is already set in the middle, but it’s too big. Let’s make it smaller: in the Rect Transform, I set our image’s Width and Height to 10 x 10.

You should have something like this now:


Before we do anything else, we need to make sure that our UI elements don’t intercept the raycast from our mouse.

In HUD, attach a Canvas Group component and, from there, uncheck Interactable and Blocks Raycasts. As you might recall, the Canvas Group component applies these two settings to its children without us having to set them manually.
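
The same two settings can also be applied from code if you prefer; a minimal sketch (the component name is just for illustration):

using UnityEngine;

public class HudRaycastSetup : MonoBehaviour
{
    void Awake()
    {
        // Mirror of the Inspector settings described above: the HUD should
        // never intercept clicks or raycasts meant for the game world.
        CanvasGroup canvasGroup = GetComponent<CanvasGroup>();
        canvasGroup.interactable = false;
        canvasGroup.blocksRaycasts = false;
    }
}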

Go ahead and play around with it. If we observe our console, whenever we fire, we hit wherever our little “crosshair” is located.

Creating Our Enemy

Now that we’ve fixed our cursor to the center, the next thing we need to do is create an enemy.

We’ll improve upon this, but for now, let’s create our first enemy! A cube!

Add a Cube to your hierarchy, name it Enemy, and then drag it near our player.

Boom! First enemy!


Now currently, nothing really happens when you shoot at it, so let’s fix it by adding an enemy health script. We’ll call it EnemyHealth.

Here’s what the code looks like:

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    public float Health = 10;

    public void TakeDamage(float damage)
    {
        Health -= damage;
        if (Health <= 0)
        {
            Destroy(gameObject);
        }
    }
}

It’s relatively simple:

  1. We have our health
  2. We have a public function that we’ll call when our player hits the enemy, which decreases the enemy’s HP
  3. When it reaches 0, we make our enemy disappear

Now before we update our script, let’s make some optimizations to our raycast.

Go to our Enemy game object and set its layer to Shootable. If the layer doesn’t exist (which it most likely doesn’t), create a new layer, call it Shootable, and then assign it to the Enemy object.

Now let’s go back to our PlayerShootingController and grab the EnemyHealth script that we just created and make them take damage:

using UnityEngine;

public class PlayerShootingController : MonoBehaviour
{
    public float Range = 100;

    private Camera _camera;
    private ParticleSystem _particle;
    private LayerMask _shootableMask;

    void Start()
    {
        _camera = Camera.main;
        _particle = GetComponentInChildren<ParticleSystem>();
        Cursor.lockState = CursorLockMode.Locked;
        _shootableMask = LayerMask.GetMask("Shootable");
    }

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit = new RaycastHit();
            if (Physics.Raycast(ray, out hit, Range, _shootableMask))
            {
                print("hit " + hit.collider.gameObject);
                _particle.Play();
                EnemyHealth health = hit.collider.GetComponent<EnemyHealth>();
                if (health != null)
                {
                    health.TakeDamage(1);
                }
            }
        }
    }
}

The changes are very similar to what we have seen before with Survival Shooter, but here’s what we added:

  1. We created our LayerMask for our Shootable layer and passed it into our Raycast function:
    1. Note: I tried to use an int at first to represent our LayerMask, but for some reason, the Raycast ignored it. From searching around online, I found that instead of using the int representation, we should just pass the actual LayerMask object. When I gave that a try, it worked…. So yay? (The sketch after this list shows why.)
  2. Next, when we hit an object, which at this point, can only be Enemy, we grab the EnemyHealth script that we added and then we make the enemy take 1 damage. Do this 10 times and the enemy will die.
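
On that layer-mask note: the int overload of Physics.Raycast expects a bit mask, not a layer index, which is the usual cause of that confusion. A short sketch of the difference (not code from the project):

using UnityEngine;

public class LayerMaskExamples : MonoBehaviour
{
    void Start()
    {
        // LayerMask.NameToLayer returns the layer INDEX (e.g. 8)...
        int shootableIndex = LayerMask.NameToLayer("Shootable");

        // ...but Physics.Raycast expects a bit MASK, so the index must be
        // shifted into a mask. Passing the raw index silently tests the
        // wrong layers, which looks like the raycast "ignoring" the int.
        int maskFromIndex = 1 << shootableIndex;

        // LayerMask.GetMask builds the mask directly, which is why it works.
        int shootableMask = LayerMask.GetMask("Shootable");

        Debug.Log(maskFromIndex == shootableMask); // true
    }
}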

Now with this script attached to our enemy, shoot our cube 10 times (which should happen really fast), and then BOOM, gone.

Conclusion

And that’s about as far as I got for Day 10! Today was a bit brief, because I didn’t get much time to work, but I think we made some good progress!

We created the basis for an enemy and added a basic crosshair UI that we can use. Tomorrow, I’m going to start looking into seeing how to add an enemy from the Unity Asset Store into the game.

Until then, I’ll see you all on day 11!

Original Link

Centralized Reusable Audio Feedback Mechanisms for Mixed Reality Apps

Intro – Feedback Is Key

Although speech recognition in Mixed Reality apps is very good, sometimes even the best recognition fails, or you slightly mispronounce something. Or the command you just said is recognized, but not applicable in the current state. Silence, nothing happens, and you wonder – did the app just not understand me, is the response slow, or what? The result is always undesirable – users wait for something to happen and nothing does. They start to repeat the command, and halfway through, the app executes the first command after all – or even worse, they start shouting, which makes for quite an embarrassing situation (both for the user and bystanders). Believe me, I’ve been there. So – it’s super important to inform your Mixed Reality app’s user right away that a voice command has been understood and is being processed. And if you can’t process it, inform the user of that as well.

What Kind of Feedback?

Well, that’s basically up to you. I usually choose a simple audio feedback sound – if you have been following my blog or downloading my apps, you are by now very familiar with the ‘pringggg’ sound I use in every app, be it an app in the Windows Store or one of my many sample apps on GitHub. If someone uses a voice command that’s not appropriate in the current context or state of the app, I tend to give some spoken feedback, telling the user that although the app has understood the command, it can’t be executed now, and if possible, for what reason. Or prompt for some additional action. For both mechanisms, I use a kind of centralized mechanism built on my Messenger behavior, which has already played a role in multiple samples.

Project Setup Overview

The hierarchy of the project is as displayed below, and all it does is show the user interface on the right:

If you say “Test command,” you will hear the “pringggg” sound I already described, and if you push the button, you’ll hear the spoken feedback “Thank you for pressing this button.” Now, this is rather trivial, but it serves to show the principle. Notice, by the way, that the button comes from the Mixed Reality Toolkit examples – I described before how to extract those samples and use them in your app.

The Audio Feedback Manager and Spoken Feedback Manager look like this:

The Audio Feedback Manager contains an Audio Source that just contains the confirmation sound, and a little script “Confirm Sound Ringer” by yours truly, which will be explained below. This sound is intentionally not spatialized, as it’s a global confirmation sound. If it was spatialized, it would also be localized, and the user would be able to walk away from confirmation sounds or spoken feedback, which is not what we want.

The Spoken Feedback Manager contains an empty Audio Source (also not spatialized), a Text To Speech Script from the Mixed Reality Toolkit, and the “Spoken Feedback Manager” script, also by me.

ConfirmSoundRinger

using HoloToolkitExtensions.Messaging;
using UnityEngine;

namespace HoloToolkitExtensions.Audio
{
    public class ConfirmSoundRinger : MonoBehaviour
    {
        private AudioSource _audioSource;

        void Start()
        {
            Messenger.Instance.AddListener<ConfirmSoundMessage>(ProcessMessage);
        }

        private void ProcessMessage(ConfirmSoundMessage arg1)
        {
            PlayConfirmationSound();
        }

        private void PlayConfirmationSound()
        {
            if (_audioSource == null)
            {
                _audioSource = GetComponent<AudioSource>();
            }
            if (_audioSource != null)
            {
                _audioSource.Play();
            }
        }
    }
}

Not quite rocket science. If a message of type ConfirmSoundMessage arrives, try to find an Audio Source. If found, play the sound. ConfirmSoundMessage is just an empty class with no properties or methods whatsoever – it’s a bare signal class.
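
In other words, the whole message type would look something like this (a sketch of the signal class as described; the namespace is assumed from the using directives above):

namespace HoloToolkitExtensions.Messaging
{
    // A bare signal class: its type alone carries the message.
    public class ConfirmSoundMessage
    {
    }
}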

SpokenFeedbackManager

Marginally more complex, but not a lot:

using HoloToolkit.Unity;
using HoloToolkitExtensions.Messaging;
using System.Collections.Generic;
using UnityEngine;

namespace HoloToolkitExtensions.Audio
{
    public class SpokenFeedbackManager : MonoBehaviour
    {
        private Queue<string> _messages = new Queue<string>();
        private TextToSpeech _ttsManager;

        private void Start()
        {
            Messenger.Instance.AddListener<SpokenFeedbackMessage>(AddTextToQueue);
            _ttsManager = GetComponent<TextToSpeech>();
        }

        private void AddTextToQueue(SpokenFeedbackMessage msg)
        {
            _messages.Enqueue(msg.Message);
        }

        private void Update()
        {
            SpeakText();
        }

        private void SpeakText()
        {
            if (_ttsManager != null && _messages.Count > 0)
            {
                if (!(_ttsManager.SpeechTextInQueue() || _ttsManager.IsSpeaking()))
                {
                    _ttsManager.StartSpeaking(_messages.Dequeue());
                }
            }
        }
    }
}

If a SpokenFeedbackMessage comes in, it’s added to the queue. In the Update method, SpeakText is called, which first checks if there are any messages to process, then checks if the TextToSpeech is available – and if so, it pops the message out of the queue and actually speaks it. The queue has two functions. First, the message may come from a background thread, and by having SpeakText called from Update, it’s automatically transferred to the main loop. Second, it prevents messages being ‘overwritten’ before they are even spoken.

The trade-off, of course, is that you might stack up messages if the user quickly repeats an action, resulting in the user getting a lot of talk while the action is already over.

On the Count > 0 instead of LINQ’s Any(): apparently, you should refrain from using LINQ extensively in Unity apps, as it’s deemed inefficient. It hurts my eyes to see it written this way, but when in Rome…

Wiring It Up

There is a script SpeechCommandExecuter sitting in Managers, next to a Speech Input Source and a Speech Input Handler, that is being called by the Speech Input Handler when you say “Test Command.” This is not quite rocket science, to put it mildly:

public class SpeechCommandExecuter : MonoBehaviour
{
    public void ExecuteTestCommand()
    {
        Messenger.Instance.Broadcast(new ConfirmSoundMessage());
    }
}

As is the ButtonClick script that’s attached to the ButtonPush:

using HoloToolkit.Unity.InputModule;
using HoloToolkitExtensions.Audio;
using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class ButtonClick : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        Messenger.Instance.Broadcast(
            new SpokenFeedbackMessage { Message = "Thank you for pressing this button" });
    }
}

The Point of Doing It Like This

Anywhere you have to give confirmation or feedback, you now just need to send a message – you don’t have to worry about setting up an Audio Source and a Text To Speech and wiring them up correctly. Two reusable components take care of that. Typically, you would not send the confirmation directly from the pushed button or the speech command – you would first validate whether the command can be processed in the component that holds the logic, and then give confirmation or feedback from there.
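
As a sketch of that validate-then-confirm pattern, using the Messenger and message types shown above (the command handler class and its CanPlaceObjectHere/PlaceObject methods are hypothetical):

using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class PlacementCommandHandler : MonoBehaviour
{
    public void OnPlaceObjectCommand()
    {
        if (CanPlaceObjectHere())
        {
            PlaceObject();
            // Command accepted: play the short confirmation sound.
            Messenger.Instance.Broadcast(new ConfirmSoundMessage());
        }
        else
        {
            // Command understood but not applicable: tell the user why.
            Messenger.Instance.Broadcast(new SpokenFeedbackMessage
            {
                Message = "You cannot place an object here"
            });
        }
    }

    private bool CanPlaceObjectHere() { return false; } // app-specific logic
    private void PlaceObject() { }
}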

Conclusion

I hope to have convinced you of the importance of feedback, and I showed you a simple and reusable way of implementing that. You can find the sample code, as always, on GitHub.

Original Link

How Gaming Can Help Us Manage Big Data

We’re in the midst of what the World Economic Forum calls the 4th Industrial Revolution, and this is characterized by the blurring of boundaries between the physical and digital. The Internet of Things means that everything from railway tracks to refrigerators is capable of generating data to help us make more effective use of them.

Indeed, Gartner predicts that around 25 billion things will be connected to the Internet by 2020, with those items collectively generating around 600 zettabytes (ZB) per year. To put that into perspective, that is roughly 275 times more than the traffic flowing from data centers to end users, and 39 times higher than all traffic to and from data centers. In other words, it’s a lot.

In God We Trust…

W. Edwards Deming famously remarked that “in God we trust; all others bring data,” and it seemed like a perfectly adroit comment to make a generation ago because data was in short supply. Now, however, data is in abundance, and we might amend Deming’s quote to reference the importance of insight. When you can see vividly each individual stalk of hay, the value comes in being able to see the needle among them.

A number of companies are attempting to make such insights vivid and visual. For instance, I wrote last year about Beautiful Information, the Nesta-backed startup that has developed an Operational Control Centre app that aims to present data in a more accessible way.

The app aims to transform previously unmanageable data into usable information by displaying it in a visual and real-time way to both managers and clinicians. The app comes with a customizable dashboard so teams and organizations can gain access to the exact data they desire, whether that’s patient waiting time or the throughput of a particular department.

Visual Data

Also taking a visual approach to data analysis is Dundee-based startup Wrld. The company has roots in the gaming industry and attempts to use the kind of engagement mechanics that are commonplace in video games to make enterprise data more accessible.

The company works with a wide range of systems integrators, which enables them to operate in a number of different niches, as their virtual worlds are used in various ways, from visualizing smart cities to understanding office utilization.

For instance, the technology is being used to replicate the new Bloomberg office building in London. While initially, the aim is to help staff find both their way around the new building and find their colleagues, the potential use cases go significantly beyond that. For instance, it’s easy to imagine a time where the smartphone app that accompanies the system is used not only to monitor employee movement patterns but also to access meeting rooms, cafeterias, and other communal facilities.

With an average of 25-30% of rented office space typically going unutilized, this kind of insight could be a welcome boon to a sector that has long struggled to apply a scientific approach to designing the ideal workplace.

The company, which recently moved headquarters to Los Angeles, has three core software development kits to help developers create realistic visualizations, whether of the interiors of buildings or of cities. These include WRLD Map Designer and the WRLD SDK for Unity, which help developers build streaming 3D maps for location-based projects.

They’re an interesting company, and as the challenge has moved on from collecting enough data to successfully analyzing it, their approach is one that’s well worth keeping an eye on. Check out the video below to see their creations in action.

Original Link

Bridging the Gap – Plugin for Unity and iOS

There is always a need for a way to communicate between Unity and Xcode, due to the absence of a direct one. When I faced this problem for the first time, I had to spend almost 3-4 days of my app’s development cycle finding the right way to build this bridge. Through this blog post, I am going to share the method so as to help out any other developer like me. Creating a bridge between Unity and iOS requires coding at both ends, so let’s discuss the Xcode side first.

According to Unity’s documentation, “Unity iOS supports automated plugin integration in a limited way. All files with extensions .a, .m, .mm, .c, .cpp located in the Assets/Plugins/iOS folder will be merged into the generated Xcode project automatically. However, merging is done by symlinking files from Assets/Plugins/iOS to the final destination, which might affect some workflows. The .h files are not included in the Xcode project tree, but they appear on the destination file system, thus allowing compilation of .m/.mm/.c/.cpp files.”

So, we will create UnityIOSBridge.m and place this class at the path “Assets/Plugins/iOS.” Unity needs the plugin functions to have C names, so it is good practice to wrap the methods that need to be called from Unity inside extern “C.” But there is no hard and fast rule: you can create a .m class, write your C-named methods just outside the implementation block, and call them from Unity. The only constraint is that you cannot call these methods if you are building your app for the simulator, as Unity iOS plugins only work on devices.

Let’s do some coding. In the UnityIOSBridge.m class, we’ll write a method that receives a C string and converts it to an NSString. The UnityIOSBridge.m class should now look as follows:

#import "UnityIOSBridge.h"

void messageFromUnity(char *message)
{
    NSString *messageFromUnity = [NSString stringWithUTF8String:message];
    NSLog(@"%@", messageFromUnity);
}

@implementation UnityIOSBridge
@end

To call the above method from Unity, we have to write a Unity script, so let’s create the file UnityIOSBridge.cs.

using UnityEngine;
using System.Collections;
using System;
//This is needed to import iOS functions
using System.Runtime.InteropServices;

public class UnityIOSBridge : MonoBehaviour
{
    /*
     * Provide declarations of the functions defined in iOS
     * that need to be called here.
     */
    [System.Runtime.InteropServices.DllImport("__Internal")]
    extern static public void messageFromUnity(string message);

    //Sends a message to iOS
    static void SendMessageToIOS()
    {
        messageFromUnity("Hello iOS!");
    }
}

It’s really as simple as it looks in the above code. Now, to call a method written in a Unity script from iOS code, we can call UnitySendMessage(“UnityObjectName”, “UnityObject’sMethodName”, “Your message”). In response, Unity will look for the named object and call that object’s method, passing along the message. The UnityIOSBridge.m class should now look like this:

#import "UnityIOSBridge.h"

void messageFromUnity(char *message)
{
    NSString *messageFromUnity = [NSString stringWithUTF8String:message];
    NSLog(@"%@", messageFromUnity);
}

@implementation UnityIOSBridge

- (void)sendMessageToUnity
{
    UnitySendMessage(listenerObject, "messageFromIOS", "Hello Unity!");
}

@end

And the Unity script UnityIOSBridge.cs should look like this:

using UnityEngine;
using System.Collections;
using System;
//This is needed to import iOS functions
using System.Runtime.InteropServices;

public class UnityIOSBridge : MonoBehaviour
{
    /*
     * Provide declarations of the functions defined in iOS
     * that need to be called here.
     */
    [System.Runtime.InteropServices.DllImport("__Internal")]
    extern static public void messageFromUnity(string message);

    //Sends a message to iOS
    static void SendMessageToIOS()
    {
        messageFromUnity("Hello iOS!");
    }

    //Receives messages sent from iOS
    static void messageFromIOS(string message)
    {
        Debug.Log(message);
    }
}

This was a very simple requirement, but what if we want to do something more? For example, our plugin should be able to notify Unity about UIApplication delegate calls. There is no need to worry, as we are going to implement that too, but to do it, we need a workaround. Objective-C is a runtime-oriented language, which means that, when possible, it defers decisions about what will actually be executed from compile and link time to when the code is actually executing at runtime.

This gives you a lot of flexibility, in that you can redirect messages to appropriate objects as you need to, or you can even intentionally swap method implementations, etc. This requires a runtime that can introspect objects to see what they do and don’t respond to, and dispatch methods appropriately. So, we will take the simple example of the “application:didFinishLaunchingWithOptions:” method of UIApplicationDelegate. We will create a category class of UIApplication and implement a load method. In the load method, we will exchange the setDelegate method implementation of UIApplication with our setApp42Delegate method, as follows:

+ (void)load
{
    method_exchangeImplementations(class_getInstanceMethod(self, @selector(setDelegate:)),
                                   class_getInstanceMethod(self, @selector(setApp42Delegate:)));
}

- (void)setApp42Delegate:(id)delegate
{
    static Class delegateClass = nil;
    if (delegateClass == [delegate class])
    {
        return;
    }
    delegateClass = [delegate class];
    exchangeMethodImplementations(delegateClass,
                                  @selector(application:didFinishLaunchingWithOptions:),
                                  @selector(application:app42didFinishLaunchingWithOptions:),
                                  (IMP)app42RunTimeDidFinishLaunching, "v@:::");
    [self setApp42Delegate:delegate];
}

static void exchangeMethodImplementations(Class class, SEL oldMethod, SEL newMethod, IMP impl, const char *signature)
{
    Method method = nil;
    //Check whether the method exists in the class
    method = class_getInstanceMethod(class, oldMethod);
    if (method)
    {
        //if the method exists, add a new method
        class_addMethod(class, newMethod, impl, signature);
        //and then exchange it with the original method implementation
        method_exchangeImplementations(class_getInstanceMethod(class, oldMethod),
                                       class_getInstanceMethod(class, newMethod));
    }
    else
    {
        //if the method does not exist, simply add it as the original method
        class_addMethod(class, oldMethod, impl, signature);
    }
}

BOOL app42RunTimeDidFinishLaunching(id self, SEL _cmd, id application, id launchOptions)
{
    BOOL result = YES;
    if ([self respondsToSelector:@selector(application:app42didFinishLaunchingWithOptions:)])
    {
        result = (BOOL)[self application:application app42didFinishLaunchingWithOptions:launchOptions];
    }
    else
    {
        [self applicationDidFinishLaunching:application];
        result = YES;
    }
    [[UIApplication sharedApplication] registerForRemoteNotificationTypes:
        (UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound | UIRemoteNotificationTypeAlert)];
    return result;
}

Let’s walk through the above code snippets: the method setApp42Delegate calls our exchangeMethodImplementations, which adds app42RunTimeDidFinishLaunching to the app delegate’s class and exchanges its implementation with “application:didFinishLaunchingWithOptions:” if that method already exists. This way, we can hook into all the UIApplicationDelegate methods, such as “applicationDidEnterBackground:” and “application:didRegisterForRemoteNotificationsWithDeviceToken:”, without making changes directly to the Unity-generated Xcode project. You can download the source code of our Unity plugin for iOS push notifications from this Git Repo.
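
On the Unity side, the native plugin can report these delegate events back through iOS's UnitySendMessage function, which invokes a method on a named GameObject. A minimal, hypothetical sketch of such a listener follows; the GameObject name "NotificationListener" and the method name "OnRemoteNotification" are assumptions for illustration, not part of the plugin shown above:

using UnityEngine;

// Attach to a GameObject named "NotificationListener" so the native side
// can call: UnitySendMessage("NotificationListener", "OnRemoteNotification", payload);
public class NotificationListener : MonoBehaviour
{
    void Awake()
    {
        // Keep the listener alive across scene loads so no callback is missed.
        DontDestroyOnLoad(gameObject);
    }

    // Invoked from the iOS side via UnitySendMessage (hypothetical method name).
    void OnRemoteNotification(string payload)
    {
        Debug.Log("Received from iOS: " + payload);
    }
}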

Original Link

Getting to Know Unity

This is an excerpt from the first chapter of Unity in Action, Second Edition. Save 37% with the code fcchunter.

How to Use Unity

Let’s jump into what the Unity interface looks like and how it operates (that is what you’re here for, isn’t it?). If you haven’t done it already, download the program from www.unity3d.com and install it on your computer (be sure to include “Example Project” if it’s unchecked in the installer). After you install it, launch Unity to start exploring the interface.

You probably want an example to look at; open the included example project. A new installation should open the example project automatically, but you can also select File > Open Project to open it manually. The example project is installed in the shared user directory, which is something like C:\Users\Public\Documents\Unity Projects\ on Windows, or Users/Shared/Unity/ on Mac OS. You may also need to open the example scene. Double-click the Car scene file (highlighted in figure 1; scene files have the Unity cube icon) found by going to SampleScenes/Scenes/ in the file browser at the bottom of the editor. You should see a screen similar to figure 1.


Figure 1: Parts of the interface in Unity.

The interface in Unity is split up into different sections: the Scene tab, the Game tab, the Toolbar, the Hierarchy tab, the Inspector, the Project tab, and the Console tab. Each section has a different purpose, but all are crucial for the game building lifecycle:

  • You can browse through all the files in the Project tab.
  • You can place objects in the 3D scene being viewed using the Scene tab.
  • The Toolbar has controls for working with the scene.
  • You can drag and drop object relationships in the Hierarchy tab.
  • The Inspector lists information about selected objects, including linked code.
  • You can test playing in Game view while watching error output in the Console tab.

This is the default layout in Unity; all of the various views are in tabs and can be moved around or resized, docking in different places on the screen. Later you can play around with customizing the layout, but for now the default layout is the best way to understand what all the views do.

Scene View, Game View, and the Toolbar

The most prominent part of the interface is the Scene view in the middle. This is where you can see what the game world looks like and move objects around. Mesh objects in the scene appear as, well, mesh objects (defined in a moment). You can also see a number of other objects in the scene, represented by various icons and colored lines: cameras, lights, audio sources, collision regions, and so forth. Note that the view you’re seeing here isn’t the same as the view in the running game; in the Scene view you can look around freely without being constrained to the game’s camera.

A mesh object is a visual object in 3D space. Visuals in 3D are constructed out of lots of connected lines and shapes; hence the word mesh.
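
If you'd like to see a mesh object show up without modeling anything, Unity's built-in primitives are the quickest route. A tiny sketch (the spawn position is arbitrary):

using UnityEngine;

public class MeshSpawner : MonoBehaviour
{
    void Start()
    {
        // CreatePrimitive builds a GameObject with a built-in cube mesh attached.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = new Vector3(0f, 1f, 0f);
    }
}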

The Game view isn’t a separate part of the screen, but rather another tab located right next to Scene (look for tabs at the top left of views). A couple of places in the interface have multiple tabs like this; if you click a different tab, the view is replaced by the new active tab. When the game is running, what you see in this view is the game. It isn’t necessary to manually switch tabs every time you run the game, because the view automatically switches to Game when the game starts.

TIP: While the game is running, you can switch back to the Scene view, allowing you to inspect objects in the running scene. This capability is hugely useful for seeing what’s going on while the game is running and it’s a helpful debugging tool which isn’t available in most game engines.

Speaking of running the game, it’s as simple as hitting the Play button above the Scene view. That whole top section of the interface is referred to as the Toolbar, and Play is located right in the middle. Figure 2 breaks apart the full editor interface to show only the Toolbar at the top, as well as the Scene/Game tabs right underneath.


Figure 2: Editor screenshot cropped to show Toolbar, Scene, and Game.

At the left side of the Toolbar are buttons for scene navigation and transforming objects—how to look around the scene and how to move objects. I suggest you spend some time practicing looking around the scene and moving objects, because these are two of the most important activities you’ll do in Unity’s visual editor. The right side of the Toolbar is where you’ll find drop-down menus for layouts and layers. As mentioned earlier, the layout of Unity’s interface is flexible, and the Layouts menu allows you to switch between layouts. As for the Layers menu, it’s advanced functionality that you can ignore for now.

Using the Mouse and Keyboard

Scene navigation is primarily done using the mouse, along with a few modifier keys used to modify what the mouse does. The three main navigation maneuvers are Move, Orbit, and Zoom. The specific mouse movements for each are described in appendix A at the end of this book, because they vary depending on what mouse you’re using. The three different movements involve clicking-and-dragging while holding down some combination of Alt (or Option on Mac) and Ctrl. Spend a few minutes moving around in the scene to understand what Move, Orbit, and Zoom do.

TIP: Although Unity can be used with one- or two-button mice, I highly recommend getting a three-button mouse (and yes, a three-button mouse works fine on Mac OS X).

Transforming objects is done through three main maneuvers, and the three scene navigation moves are analogous to the three transforms: Translate, Rotate, and Scale (figure 3 demonstrates the transforms on a cube).


Figure 3: Applying the three transforms: Translate, Rotate, and Scale. (The lighter lines are the previous state of the object before it was transformed).

When you select an object in the scene, you can then move it around (the mathematically accurate technical term is translate), rotate the object, or scale how big it is. Relating back to scene navigation, Move is when you Translate the camera, Orbit is when you Rotate the camera, and Zoom is when you Scale the camera. Besides the buttons on the Toolbar, you can switch between these functions by pressing W, E, or R on the keyboard. When you activate a transform, you’ll notice a set of color-coded arrows or circles appears over the object in the scene; this is the Transform gizmo, and you can click-and-drag this gizmo to apply the transformation.
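
The same three transforms can also be applied from script by manipulating an object's Transform component. Here's a quick illustrative sketch (the specific values are arbitrary):

using UnityEngine;

public class TransformDemo : MonoBehaviour
{
    void Start()
    {
        // Translate: move 2 units along the X axis
        transform.Translate(2f, 0f, 0f);

        // Rotate: spin 45 degrees around the Y axis
        transform.Rotate(0f, 45f, 0f);

        // Scale: double the object's size on every axis
        transform.localScale = Vector3.one * 2f;
    }
}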

A fourth tool is next to the transform buttons. Called the Rect tool, it’s designed for use with 2D graphics. This one tool combines movement, rotation, and scaling. These operations have to be separate tools in 3D but are combined in 2D because there’s one less dimension to worry about. Unity has a host of other keyboard shortcuts for speeding up a variety of tasks. For now, let’s move on to the remaining sections of the interface!

The Hierarchy Tab and the Inspector

Looking at the sides of the screen, you’ll see the Hierarchy tab on the left and the Inspector on the right (see figure 4). Hierarchy is a list view with the name of every object in the scene, nested according to their hierarchy linkages. It’s a way of selecting objects by name instead of hunting them down and clicking them within Scene. Hierarchy linkages group objects together visually, like folders, and allow you to move the entire group as one.


Figure 4: Editor screenshot cropped to show the Hierarchy and Inspector tabs.

The Inspector shows you information about the currently selected object. Select an object and the Inspector is then filled with information about that object. The information shown is pretty much a list of components, and you can even attach or remove components from objects. All game objects have at least one component, Transform, and you’ll always see information about positioning and rotation in the Inspector. Objects often have several components listed here, including scripts attached to that object.
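
What the Inspector shows mirrors what scripts can do with components at runtime. A brief sketch (Rigidbody is just an example component to attach):

using UnityEngine;

public class ComponentDemo : MonoBehaviour
{
    void Start()
    {
        // Every GameObject has a Transform; read its position.
        Debug.Log("Position: " + transform.position);

        // Attach a component at runtime, just like "Add Component" in the Inspector.
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.mass = 2f;
    }
}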

The Project and Console Tabs

At the bottom of the screen, you’ll see Project and Console (see figure 5). As with Scene and Game, these aren’t two separate portions of the screen but rather tabs that you can switch between. Project shows all the assets (art, code, and so on) in the project. Specifically, on the left side of the view is a listing of the directories in the project; when you select a directory, the right side of the view shows the individual files in that directory. The directory listing in Project is similar to the list view in Hierarchy, but whereas Hierarchy shows objects in the scene, Project shows files that may not be contained within any specific scene (including scene files; when you save a scene, it shows up in Project!).


Figure 5: Editor screenshot cropped to show the Project and Console tabs.

TIP: The Project view mirrors the Assets directory on disk, but you generally shouldn’t move or delete files directly in the Assets folder. Instead, do those things within the Project view, and Unity will keep that folder in sync.

The Console is the place where messages from the code show up. Some of these messages are debug output deliberately placed by you, but Unity also emits error messages if it encounters problems in the script you wrote.
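
The deliberately placed debug output mentioned here comes from Unity's Debug class, for example:

using UnityEngine;

public class ConsoleDemo : MonoBehaviour
{
    void Start()
    {
        Debug.Log("A plain message in the Console");
        Debug.LogWarning("A warning, shown with a yellow icon");
        Debug.LogError("An error, shown with a red icon");
    }
}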

That’s all for now! For more on game dev with Unity, download the free first chapter of Unity in Action, Second Edition and see this Slideshare presentation.

Original Link