So then we come to the next pillar, which is pinpoint accuracy. We've covered what is important, right? Now we want to ask the question: how can we help guide the player? Where to look, where to go, how to anticipate things. And we use pretty much these three tools: obstruction and occlusion, distance and space, and 3D audio. I want to talk about that for a little bit. So, our occlusion system. We struggled for a long time because we had a rather simple obstruction and occlusion computation, which was a single raycast to do obstruction. But you can get unlucky, right? You can just hit that guy, and if there was an enemy behind him, it would be occluded. So we couldn't drive too much through that unless we did randomized rays or certain other things. Our implementation was always very simple and we ran into problems.
And then the occlusion, meaning a sound completely encapsulated in a different area: we also never really had time to implement the full system. So it was more an on/off switch than anything with dynamic range on that sound. We had some dynamic range here, but we couldn't drive it too much because it was sometimes random, and we didn't have too much there. So it was pretty black and white. It was really hard to anticipate danger and to help the player understand what is behind a wall, so they don't have to care, versus what is behind a palm tree, where they do have to care. And then we had this special scenario where Paul, our sound supervisor, was constantly saying, "I swear there was someone around me and I couldn't find them." And then we figured out, again through ShadowPlay, that the person was just upstairs, running around upstairs. There was obstruction, obviously, but he was so close, and we couldn't tune it much without breaking other scenarios, so we were really not happy with that system.
At some point our lead engine programmers were showing technology that we were using in Overwatch for AI path data. And everyone in the sound department said, it would be so cool if, instead of shooting rays, we knew about the path we would take to a sound. So that's what we did. In this top-down example, we have the listener there and a sound; we do a raycast, and then we ask this AI data how long the path is. And there might be a path deviation of a few percent. So if this is a 20-meter raycast and the path is a few meters longer, we get a value out of that. If it's six meters longer, we get a path deviation of 30%. And we can take it to an extreme: we still do the simple raycast, then we do a path calculation, and if that is suddenly, let's say, 40 meters long, then we have a fully occluded sound.
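A minimal sketch of that mapping, with made-up thresholds (the 30% and full-occlusion figures come straight from the example above; the actual curve the game shipped with isn't public):

```cpp
#include <algorithm>
#include <cstdio>

// Path-deviation occlusion sketch.
// rayDistance: straight-line distance from listener to emitter.
// pathLength:  length of the AI navigation path between the same two points.
// 0% deviation -> unoccluded; by 100% deviation (path twice as long as the
// ray) we treat the sound as fully occluded. Thresholds are illustrative.
float PathDeviationOcclusion(float rayDistance, float pathLength)
{
    if (rayDistance <= 0.0f || pathLength <= rayDistance)
        return 0.0f;                               // direct path, no occlusion
    float deviation = (pathLength - rayDistance) / rayDistance;
    const float fullOcclusionAt = 1.0f;            // assumed ramp endpoint
    return std::min(deviation / fullOcclusionAt, 1.0f);
}

int main()
{
    std::printf("%.2f\n", PathDeviationOcclusion(20.0f, 26.0f)); // 0.30: 6m longer
    std::printf("%.2f\n", PathDeviationOcclusion(20.0f, 40.0f)); // 1.00: fully occluded
}
```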
Wwise provides an obstruction and occlusion system that lives in the project settings, and Scott will talk in a second about the pros and cons of that. What I'm getting at is we now have a really fluid value in a certain dynamic range, which helps you anticipate threats coming towards you. Yeah, so utilizing this value, we started with the Wwise project settings, and we found that it worked really well, but it was hard to tune across different types of sounds. Since we had that categorized organizational structure within Wwise, we wanted to be able to tune things on a general level, like footsteps have a certain tuning or weapons have a certain tuning. But this being in the project settings didn't allow us that. We were able to maybe do some of that through a ratio we were tuning in our tools, but it was very unwieldy, hard to manage, and very bug-prone.
Instead, we went back to RTPCs, which were our friend on this project, and we used probably way too many. So the occlusion is the same thing: it goes from zero occlusion here to full occlusion. And we could drive all those same parameters: low-pass filter, high-pass filter, and volume, just like you can in the project-settings version of occlusion. But then we could also drive other things, like aux sends: more or less send to reverbs, or more or less send to our quad delay, all driven by that same value. And again, now it's nice and easy to say, okay, footsteps have this type of curve driving all these different parameters, but 3P Weapon Fire has a very different set of curves. So I'll show a couple of these things in the tool.
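On the game side, feeding that path-deviation value into the RTPC could look something like this minimal sketch. It assumes the Wwise SDK's AK::SoundEngine::SetRTPCValue string overload; "Occlusion" is a hypothetical game parameter name, not necessarily what the project used:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Push the 0..1 path-deviation occlusion value to a Wwise game parameter so
// the per-category RTPC curves (LPF, HPF, volume, aux sends) can react to it.
// "Occlusion" and the 0..100 authored range are assumptions for illustration.
void ApplyOcclusion(AkGameObjectID emitter, float occlusion01)
{
    AK::SoundEngine::SetRTPCValue("Occlusion", occlusion01 * 100.0f, emitter);
}
```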
So under 3P Weapon Fire, you'll see that we're driving the high-pass and low-pass on this as well. As it gets more occluded, you don't necessarily want to just muffle it; you also want to lose some low frequencies. So that's these two curves. You're then driving the aux sends. In this case, we're actually turning up the aux sends, because we liked the way that sounded. And we did this on different sounds in different ways, so we used it in various creative ways to drive different things. It did have one major drawback though, especially in regards to the low-pass filter.
The other system was an independent low-pass that didn't conflict with other low-passes in the project, where this one is part of an additive chain of low-passes. What I mean is that for certain sounds, like carpet footsteps, we would take a concrete footstep and just duplicate it to save on wave data, and add 20 or 40 of low-pass to the sound, so you get a muffled version of the same footstep. Those numbers are very sensitive: the really active values are between 20 and maybe 50 or 60, which is where your listening curve is. And then this occlusion curve would come along, and all of a sudden you'd have a carpet footstep, you'd step barely behind a wall, and it would disappear entirely, because these numbers would add together and you wouldn't be able to hear the sound. So what we actually had to do is go through all these carpet footstep sounds and put in real-time EQ effect plug-ins to break the relationship between these two different types of low-passes that we wanted to drive.
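To make the failure mode concrete, here's a tiny sketch of the additive behavior; the 40 and 30 values are illustrative, not the project's actual numbers:

```cpp
#include <algorithm>
#include <cstdio>

// Wwise low-pass values run 0..100, and additive sources sum before clamping.
// A footstep baked with +40 LPF (the carpet duplicate) plus ~30 more from the
// occlusion RTPC lands at 70, far past the ~20..60 band where the filter is
// audibly useful, so the footstep effectively vanishes.
int main()
{
    const int bakedCarpetLPF    = 40; // baked into the duplicated asset
    const int occlusionCurveLPF = 30; // from barely stepping behind a wall
    int effective = std::min(bakedCarpetLPF + occlusionCurveLPF, 100);
    std::printf("effective LPF: %d\n", effective); // 70 -> essentially inaudible
}
```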
Alright, let's go back. So overall, I think this is maybe one of the most powerful features we added to the game. As soon as we put it in, there was this really amazing feeling of safety. You would go into a cubby hole and everything would just get quiet and muffled, and you would hear clearly when one person walked into the room; or when you went back out, the whole mix would sort of unfold. And it did it in a smooth, transitional way that felt very natural. So you could almost play by listening to the walls a little bit.
You would kind of go around, and as you came around this corner you would hear this blossoming. And it would help you ignore enemies that can't hurt you. They may be physically close, but this guy who's right under the stage, for example, has to walk over there and around and up to come get me. So even though they're physically close, we don't really want to hear the sound that loud, because of the path. So we have some examples here. Here are footsteps and weapons. You'll hear a Widowmaker here just walk around the corner and back, and just notice how the footsteps have a very different curve from the weapon.
And you'll be able to anticipate when she's going to be visible and not. So you can still hear that weapon, but you can't hear the footsteps at all. You can basically hear how far away she is from that location, and you can use that information to anticipate her coming and react to her. The next video example is of a full battle. This will show how, when I go back into the cubby hole, everything kind of dims down and gets a lot quieter, and you feel a sense of safety.
You'll even notice, if you listen closely, that "It's high noon," that line, had a very distinct curve that more or less ignored the walls, because hearing him say that is so key to the gameplay. That's his ultimate ability: when McCree, the cowboy, says that line, he can basically one-shot kill anyone in sight. So as soon as you hear that line, you have to react, hide, and respond accordingly. We purposely avoided some of the occlusion values on things like ultimate ability lines and sounds so you could react to them regardless.
Yeah, I think there's some stuff you want to talk about in general. Yeah, so for sounds in general in the game, we took an approach that is becoming very common, especially with the proliferation of Wwise: we layered our sounds based on distance. So you hear the distant layers when you're far away, then the close layers come in, and the mech layer comes in when you're right on top of it. We do indoor/outdoor tail switching. We're doing low-pass and high-pass filtering over distance. We even utilized a feature called Focus and Spread, which we co-developed with Audiokinetic (really, they developed it and we just asked for it). The spread was essentially a way to make things very stereo but still pan around the field.
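To illustrate the distance-layering idea, here's a minimal sketch with made-up band edges; in practice this is authored as attenuation/RTPC curves in Wwise rather than in code:

```cpp
#include <cstdio>
#include <initializer_list>

// Crossfaded distance layers: each layer is at full gain inside
// [fullFrom, fullTo] and fades over 'fade' meters on either side.
// Layer names and distances are illustrative only.
struct Layer { const char* name; float fullFrom; float fullTo; float fade; };

float LayerGain(const Layer& l, float d)
{
    if (d < l.fullFrom - l.fade || d > l.fullTo + l.fade) return 0.0f;
    if (d < l.fullFrom) return (d - (l.fullFrom - l.fade)) / l.fade; // fade in
    if (d > l.fullTo)   return ((l.fullTo + l.fade) - d) / l.fade;   // fade out
    return 1.0f;
}

int main()
{
    const Layer layers[] = {
        { "mech",     0.0f,  5.0f,  5.0f }, // right on top of it
        { "close",    0.0f, 25.0f, 15.0f },
        { "distant", 30.0f, 90.0f, 20.0f }, // far away
    };
    for (float d : { 2.0f, 20.0f, 60.0f })
        for (const Layer& l : layers)
            std::printf("d=%2.0fm  %-7s gain=%.2f\n", d, l.name, LayerGain(l, d));
}
```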
We ended up asking for Focus and Spread but didn't end up using it as much as we thought, because the pinpoint accuracy of sounds took precedence over what we would have liked aesthetically. And that's something I want to bring across. Through most of the other games in my career, it was always aesthetics above all; you wanted a huge, cinematic feel. But in this game it was very much, okay, it may not sound cinematic to have this wide stereo spread, but it's better for the game to say this person's right there. So we'd narrow things in, despite our instincts, I suppose. And then finally, the last thing I want to talk about on this slide is the reverb and quad delay. We use those to place things in the world, and we have some good examples of that.
So this is a top-down screenshot of our character Zarya. She's shooting a beam out in front of her, so you have some reference point. What we do is we actually raycast out to each of the walls surrounding her and get an idea of how far away those walls are and what type of space, distance-wise, she is in. Then we use those values. We made a custom plugin with the help of a contractor: we took four delay lines in a plugin and allowed each to pan to one of the surround speakers (front left, front right, rear left, rear right), and we drove all of its parameters via RTPC based on how close a wall was to you.
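Here's a rough sketch of the kind of mapping involved, assuming hypothetical constants; the shipped plugin drives many more parameters (delay times, levels, low-pass/high-pass, notch frequencies) through RTPC curves:

```cpp
#include <cmath>
#include <cstdio>

// One delay tap per surround corner, derived from a wall-distance raycast.
// All constants are illustrative; the real tuning lived in RTPC curves.
struct DelayTap { float delayMs; float gain; float lowpassHz; };

DelayTap TapForWall(float wallDistanceMeters)
{
    const float speedOfSound = 343.0f; // m/s
    DelayTap tap;
    // Out-and-back travel time to the wall sets the echo delay.
    tap.delayMs = 2.0f * wallDistanceMeters / speedOfSound * 1000.0f;
    // Nearer walls reflect louder and brighter; farther walls quieter and duller.
    tap.gain      = 1.0f / (1.0f + 0.15f * wallDistanceMeters);
    tap.lowpassHz = std::fmax(1500.0f, 16000.0f - 400.0f * wallDistanceMeters);
    return tap;
}

int main()
{
    // Raycast hits toward front-left, front-right, rear-left, rear-right walls.
    const float wallDist[4] = { 3.0f, 12.0f, 7.0f, 30.0f };
    for (float d : wallDist) {
        DelayTap t = TapForWall(d);
        std::printf("wall %4.1fm -> %6.1f ms, gain %.2f, lpf %5.0f Hz\n",
                    d, t.delayMs, t.gain, t.lowpassHz);
    }
}
```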
This kind of gave an automatic early-reflection type of simulation: not exactly 100% real-world, but close enough that it felt like you were in the space. And we could run many sounds through this plugin via the aux bus structure. I'll show you some examples of the RTPCs we set up on this, because it drives a ton of different data to get the sound. So if I look up here and find our aux buses under 2D audio, and expand and find our quad delay: we have two different versions, one for loud sounds that echo far in the environment, and one for quiet sounds, like footsteps, that wouldn't echo as far.
And then the effect is driven by all of those parameters. So if I go over to the RTPC tab on this (we normally sort these by some notes we put in here so we can see what's what), there are the three delay taps and notch frequencies. And if I select all of these, that's all the parameters being driven based on just those four distances. Some of these are delay times, some are volumes, some are the low-pass or high-pass of an individual delay line, and there's a notch frequency we're driving. All of these things basically make it so there's a bright reflection: if this wall here is close, I'll get bright reflections from the side.
If that wall over there is far away, I get a more muted, dull, echo-type reflection from that distance. So there are some video examples of this that I'd love to show you. Where'd it go? Okay. So this starts with only the effect, so you won't hear any dry signal; it's going to sound a little strange. Then we'll see the same thing again with the dry signal brought in, and you'll see how it puts a sense of space into the sound. So the only thing you're hearing there is the stereo version of that quad delay plugin, so you can hear those distinct, distant slapback delays.
And we don't have to modulate or author anything there per level. We worked with the curves and the RTPCs a lot to set it up, but once it was set up, we didn't have to change it on a level-to-level basis. And then I also want to talk a bit about Dolby Atmos. We were fortunate enough to work with Dolby and Audiokinetic to be the first game to use Dolby Atmos for headphones. It came about at GDC 2015, where we heard a demo of their HRTF technology in which they took an Atmos mix of our cinematic and rendered it into a headphone mix.
And I heard it, and in the video there was a Widowmaker that grapples up to the edge and shoots down at the scene. And in this headphone mix you could hear her clearly, behind and up to the right. And I was like, okay, I can hear her there; that would be great to have in the game. So we started talking to them around that time, and it was a lot of back and forth. We tried different means of implementing it in Wwise, and I think we came up with something that's really great. And I can show you a bit of how that's actually set up in the engine, because we talked about the bus structure earlier.
So this division between 2D and 3D audio was done right before we shipped, for that very reason. Right here is the Dolby headphone virtualizer. And in real time, when the game flips its mode on, this bus becomes a 12-channel, 7.1.4 bus configuration. That gets fed to the virtualizer, and in the headphone mix you can hear things pan above, around, and behind you. For me it's been a huge bonus in playing the game. It's cool right off the bat, but as you get used to it, you can pick up all these little details and you're like, oh, he's right above me. You just know he's there, or that this guy's behind me.
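A hypothetical sketch of that mode flip, assuming the Wwise SDK's SetBusConfig API; the bus ID and the 7.1.4 channel-config constant are assumptions (check your SDK version for the exact Dolby Atmos configurations it exposes):

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// When the Atmos-for-headphones mode is enabled, reconfigure the 3D bus to a
// 12-channel 7.1.4 layout so the downstream virtualizer bus can fold the
// surround and height channels into the stereo headphone mix.
// AK_SPEAKER_SETUP_DOLBY_7_1_4 is assumed to exist in the SDK in use.
void EnableAtmosHeadphoneMode(AkUniqueID busId3dAudio)
{
    AkChannelConfig cfg;
    cfg.SetStandard(AK_SPEAKER_SETUP_DOLBY_7_1_4);
    AK::SoundEngine::SetBusConfig(busId3dAudio, cfg);
}
```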
And you get that through headphones anyone owns; by the way, these are regular stereo headphones. You don't need to buy anything for this to run. It's best over an analog jack, because that way you don't have any other virtualization going on; if you have a 7.1 or 5.1 pair of headphones, you should turn off that virtualization to use this feature. But it's been a really great thing for us. It's helped a lot with that goal of trying to play by sound. Yeah, and that was the whole pillar of pinpoint accuracy, right? These were the three things we talked about: obstruction and occlusion, the quad delay in the middle, and then Dolby Atmos.