Research Article

Design and Implementation of Sound Source Localization Method for Robot Motion Control

Corresponding author: Dr. Xianfeng Yuan, School of Control Science and Engineering, Shandong University, Jinan 250061, China, Tel: +86-13789817696; Email: yuanxianfeng_sdu@126.com

Abstract

In this paper, we design a novel sound source localization method for robot motion control. Time difference of arrival (TDOA), serial communication, fuzzy motion control and other technologies are combined in this design. Using the proposed method, our robot platform is able to determine the talker's location and steer toward it automatically, which provides a better interactive experience. Numerous experiments and practical applications show that the designed method is stable and flexible, with high precision and good real-time performance. It therefore satisfies the motion control precision and real-time requirements of the home service robot.

Keywords: Service Robot; Sound Source Localization; TDOA; Microphone Array; Fuzzy Control 

Introduction

With the rapid development of robot technology and the urgent needs of society, home service robots have gradually entered people's daily life and provide services in diverse areas such as home security, entertainment, elder care and early education [1-3]. Autonomous movement is essential for a better interactive experience. However, most service robots currently on the market have no wheels and cannot move. Moreover, although a few robots are able to move, their motion control strategies rely only on apps or remote controllers, which is relatively rigid, inflexible and unable to meet users' requirements for a better human-robot interactive experience. On the other hand, sound source localization technology [4] is attracting more and more attention in many fields.

Stachurski et al. proposed a prototype solution that adds sound localization capability to a surveillance camera so that it can point in the direction of interest [5]. Pan et al. designed a multi-channel data acquisition system to detect the acoustic emission signal from internal damage and fracture of rock or mountain masses [6]. Sun et al. investigated the use of acoustic emission (AE) for a simulated crack in a stainless steel pipeline and proposed an approach to linear sound source localization in pipelines using two or more acoustic sensors [7]. Bandi et al. dealt with the detection of gunshots using microphone sensor arrays placed in different locations and processed in MATLAB [8].

To solve the problems mentioned above, this paper designs a novel sound source localization method for robot motion control. Based on this method, our robot is able to achieve autonomous steering and forward movement, combined with the obstacle avoidance module and the human pyroelectric detection module. Experimental results indicate that the system achieves flexible and accurate movement with the proposed method, which offers a better human-robot interactive experience and satisfies the motion control precision and real-time requirements of the home service robot.

The remainder of this paper is organized as follows. Section 2 briefly introduces the overall design scheme of this system. Section 3 illustrates the sound source localization principle. In Section 4, the motion control strategy is introduced. Experimental results are discussed in Section 5. Section 6 is devoted to conclusion.

 

Overall design scheme

The experimental platform in this paper is based on the home service mobile robot developed by our research group [9]. The motion control system mainly consists of sound source localization module, serial communication module, motion control module, obstacle avoidance module, human pyroelectric detection module, etc. The overall framework of the robot motion control system is illustrated in Figure 1.

The proposed sound source localization method mainly consists of three steps. Firstly, the voice signal input devices acquire voice signals and send them to the sound source localization module. Secondly, based on the information sent by the sound source localization module, the host computer calculates the voice angle and sends it to the lower computer via the serial port. Thirdly, the motion control module uses this information to achieve autonomous steering, combined with the obstacle avoidance module and the human pyroelectric detection module.
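The paper does not show the serial protocol used between the host and the lower computer in the second step. A minimal sketch, assuming the pyserial package, a made-up ASCII frame "ANG,<degrees>\n", and placeholder port settings, might look like this:

```python
import serial  # pyserial

def send_angle(angle_deg: float, port: str = "/dev/ttyUSB0", baud: int = 115200) -> None:
    """Send the estimated sound-source angle to the lower computer over the
    serial port. The frame format "ANG,<degrees>\n", the port name and the
    baud rate are illustrative assumptions, not values from the paper."""
    with serial.Serial(port, baudrate=baud, timeout=1.0) as link:
        link.write(f"ANG,{angle_deg:.1f}\n".encode("ascii"))

# Example: report a 45-degree estimate to the motion-control board.
# send_angle(45.0)
```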

The process of sound source localization mainly includes three parts: sound signal capture and conversion, preprocessing, and localization, as shown in Figure 2. Firstly, sound signals are converted into electrical signals by the microphone array. The signals are then amplified and sent to the upper computer, where angle calculation and, eventually, motion control are performed.

Sound source localization principles

Sound source localization refers to acquiring the voice signal with a microphone array and then applying digital signal processing to obtain the angle of the sound source. With the rapid development of information technology, it has been applied in many fields.

According to the localization principle, localization methods can generally be classified into three types. The first is the high-resolution spectral estimation algorithm. It performs well in far-field localization, but its precision can be greatly degraded in spatially correlated noise fields and its computational complexity is high. The second is based on controllable beamforming. This algorithm is essentially a kind of maximum likelihood estimation and requires prior information about the sound source and the noise; moreover, its computational load is extremely heavy. The third is the TDOA localization algorithm [10], which is also one of the most commonly used methods at present. It mainly includes two steps: first, receive the sound source signal and calculate the relative time delay between the microphones; second, estimate the relative position and angle information from the time differences of arrival. The advantages of TDOA are its simplicity, its small computational load, and the fact that it requires no prior information about the sound source or the noise.
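The paper does not list its delay-estimation routine. As a minimal sketch of the first TDOA step, the relative delay between two microphone channels can be taken from the peak of their cross-correlation, assuming NumPy and a known sampling rate fs:

```python
import numpy as np

def estimate_tdoa(x1: np.ndarray, x2: np.ndarray, fs: float) -> float:
    """Return how much later the sound arrives at mic2 (signal x2) than at
    mic1 (signal x1), in seconds, from the peak of their cross-correlation."""
    cc = np.correlate(x2, x1, mode="full")     # lags from -(len(x1)-1) to len(x2)-1
    lags = np.arange(-(len(x1) - 1), len(x2))
    return float(lags[np.argmax(cc)]) / fs     # positive: the wave reached mic1 first

# Example with a synthetic 3-sample delay at fs = 16 kHz:
# fs = 16000.0
# x1 = np.random.randn(1024)
# x2 = np.roll(x1, 3)                          # mic2 hears the same wave 3 samples later
# print(estimate_tdoa(x1, x2, fs))             # ~ 3 / 16000 s
```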

According to their structure, microphone arrays can be divided into linear arrays, planar (area) arrays and three-dimensional arrays [11]. In a linear array all microphones are placed on a straight line, so it can only locate a sound source within a half plane. An area array has all microphones mounted in a plane and is able to locate a sound source within a half space. A three-dimensional array can locate a sound source at any point within a certain space, mainly based on the TDOA algorithm. This paper adopts the uniform circular array illustrated in Figure 3, with five microphones vertically fixed on a circular base whose radius is 30 mm.

Figure 3. Circular microphone array distribution.
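For reference, the microphone coordinates of such a uniform circular array (radius 30 mm) can be generated as follows; the absolute orientation of mic1 is an arbitrary assumption, since the paper does not state it.

```python
import numpy as np

RADIUS_M = 0.030   # 30 mm circular base, as described above
NUM_MICS = 5

# Uniform angular spacing; placing mic1 at 0 rad is an arbitrary assumption.
angles = 2.0 * np.pi * np.arange(NUM_MICS) / NUM_MICS
mic_xy = np.stack((RADIUS_M * np.cos(angles), RADIUS_M * np.sin(angles)), axis=1)
print(mic_xy)      # 5 x 2 array of (x, y) microphone coordinates in metres
```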

The relative position relation between the microphone array and the sound source is derived from the geometry in Figure 3. Let the distances between the sound source s and mic1, mic2 be r1 and r2 respectively, and let the distance between mic1 and mic2 be d. The propagation path difference r1 − r2 then equals c·τ12, where c is the speed of sound and τ12 is the time difference of arrival between the two microphones.

Figure 4. Spherical wavefront (near-field geometry: source s, microphones mic1 and mic2 at distances r1 and r2, microphone spacing d).

Figure 5. Plane wavefront (far-field geometry for the microphone pair mic1, mic2 with spacing d).

In this paper, sound source localization is achieved by the TDOA localization algorithm. The geometric model of the microphone array is illustrated in Figure 6. Experimental results indicate that the angle localization precision is within ±7°, which satisfies the requirements of accurate autonomous movement for an indoor robot and provides a novel sound source localization method for home service robots.

Figure 6. Geometric model of the microphone array.

As the distance between the sound source and the microphone array tends to infinity, the spherical wavefront (Figure 4) tends to the plane wavefront shown in Figure 5.

According to the analysis above, sound sources can be divided into near-field and far-field sources based on the wavefront type. The microphone array in this work is mainly aimed at near-field sound sources, so the source needs to satisfy the near-field condition r < 2L²/λ, where r represents the distance between the sound source and the microphone array, L is the size of the microphone array and λ denotes the wavelength of the sound source signal.
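A small helper for the field check and for converting a pair delay into a bearing is sketched below. The criterion r < 2L²/λ and the plane-wave relation cos θ = c·τ/d are standard textbook forms assumed here, not formulas quoted from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, room temperature

def is_near_field(r: float, L: float, wavelength: float) -> bool:
    """Treat the source as near-field when r < 2*L**2/lambda (standard
    Fraunhofer criterion, assumed here for the symbols r, L, lambda above)."""
    return r < 2.0 * L ** 2 / wavelength

def pair_bearing(tdoa: float, d: float, c: float = SPEED_OF_SOUND) -> float:
    """Bearing (degrees) of the source relative to the mic1->mic2 axis under the
    plane-wavefront approximation: cos(theta) = c * tdoa / d."""
    ratio = np.clip(c * tdoa / d, -1.0, 1.0)   # guard against noisy delay estimates
    return float(np.degrees(np.arccos(ratio)))

# Example: a 0.1 ms delay across a 60 mm baseline corresponds to roughly 55 degrees.
# print(pair_bearing(1.0e-4, 0.060))
```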

Motion control strategy

The whole robot motion control process is illustrated in Figure 7. Firstly, the robot completes initialization and waits for the user to wake it. Secondly, the sound localization information is calculated from the acquired signal and sent to the host computer. Thirdly, the motion instructions are sent to the lower computer via the serial port, and the wheel speed values are incremented step by step until they reach the set value. Finally, the robot runs towards the user and stops when it detects an obstacle or a person.

In our control strategy, the controller-switching criterion is designed as shown in Eq. (5), where α + β = 1 and the concrete values are determined by tests. When the value of Eq. (5) is greater than the threshold, the system switches to the fuzzy controller; otherwise it chooses the PID controller. The switching criterion considers not only the error but also the error change rate, which avoids PID overshoot in the case where the error is small but the error change rate is very large.

S(k) = α·|e(k)| + β·|ec(k)|    (5)
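A sketch of the switching logic described above, assuming the weighted-sum form of Eq. (5) and placeholder values for the weights and the threshold (the paper says these are tuned experimentally):

```python
ALPHA, BETA = 0.6, 0.4      # placeholder weights with ALPHA + BETA = 1
SWITCH_THRESHOLD = 15.0     # placeholder; the paper determines these values by tests

def pick_controller(error: float, error_rate: float) -> str:
    """Choose the fuzzy controller when the weighted combination of the angle
    error and its change rate exceeds the threshold, otherwise use PID."""
    s = ALPHA * abs(error) + BETA * abs(error_rate)
    return "fuzzy" if s > SWITCH_THRESHOLD else "pid"

# Example: a 40-degree error with a moderate change rate selects the fuzzy stage.
# print(pick_controller(40.0, 10.0))
```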


In order to make the robot realize accurate steering at all angles, we adopt a subsection controller for the steering process. Its core idea is to switch between the fuzzy controller [12] and the PID controller based on the error and the error change rate, so that each controller exerts its advantages at a different stage.

In the initial stage, the fuzzy controller is adopted for its rapidity and strong anti-interference ability. When the error and the error change rate are small, the PID controller is used for its stable performance and high control precision. This strategy improves the speed control accuracy and eventually makes the motion more fluent and natural.

Figure 8. Structure of the subsection controller: the deviation between the desired angle and the actual angle, together with its derivative (d/dt), is fed through a multiport switch to either the PID controller or the fuzzy controller, whose output sets the motor speed.

Ti j 1 T

Eq. (6) describes the discrete form of the PID controller, where T is the sampling time, Td is the differential time constant, Ti denotes the integration time constant, Kp represents the proportional coefficient, and e(k) and U(k) denote the angle error and the controller output at time k, respectively.
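A direct positional implementation of Eq. (6) might look like the following; the gains and the sampling time are placeholders, since the paper does not report its tuned values.

```python
class DiscretePID:
    """Positional discrete PID following Eq. (6):
    U(k) = Kp * ( e(k) + (T/Ti) * sum_{j=1..k} e(j) + (Td/T) * (e(k) - e(k-1)) )."""

    def __init__(self, kp: float, ti: float, td: float, dt: float):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.err_sum = 0.0      # running sum of e(j)
        self.prev_err = 0.0     # e(k-1)

    def update(self, error: float) -> float:
        self.err_sum += error
        out = self.kp * (error
                         + (self.dt / self.ti) * self.err_sum
                         + (self.td / self.dt) * (error - self.prev_err))
        self.prev_err = error
        return out

# Example (gains and sampling time are placeholders, not the paper's values):
# pid = DiscretePID(kp=2.0, ti=1.5, td=0.05, dt=0.02)
# motor_cmd = pid.update(angle_error)
```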

In our design, the angle error E and its change rate EC are used as the inputs of the fuzzy controller, and the control variable U is the motor speed. The fuzzy language values of E, EC and U are as follows.

E = {PB, PM, PS, PO, NO, NS, NM, NB}

EC = {PB, PM, PS, O, NS, NM, NB}

U = {PB, PM, PS, O, NS, NM, NB}

The universes of discourse are divided according to these language values, and the membership functions of E, EC and U are shown in Figures 9-11. Triangular membership functions with an overlapping coefficient of 0.5 are adopted.

Figure 9. Membership function of E.

Figure 10. Membership function of EC.

The fuzzy control rules are given in Table 1, and the centre-of-gravity (COG) method is then used for defuzzification. The COG formula is shown in Eq. (7).


u0 = ∫ μ(u)·u du / ∫ μ(u) du    (7)

Figure 11. Membership function of U.

E \ EC   PB   PM   PS   O    NS   NM   NB
PB       NB   NB   NM   NS   NS   NS   NS
PM       NB   NM   NS   NS   NS   NS   NS
PS       NM   NS   NS   NS   O    O    O
PO       NS   NS   O    O    O    PS   PS
NO       NS   NS   O    O    O    PS   PS
NS       O    O    O    PS   PS   PS   PM
NM       PS   PS   PS   PS   PS   PM   PB
NB       PS   PS   PS   PS   PM   PB   PB

Table 1. Fuzzy control rules (rows: error E; columns: error change rate EC; entries: control output U).
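The defuzzification step of Eq. (7) can be illustrated with a small numeric sketch. The triangular sets and rule strengths below are purely illustrative and do not reproduce the membership functions of Figures 9-11 or the full rule base of Table 1.

```python
import numpy as np

def triangle(u: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0)

def cog_defuzzify(u: np.ndarray, mu: np.ndarray) -> float:
    """Discrete centre-of-gravity defuzzification of Eq. (7):
    u0 = sum(mu(u) * u) / sum(mu(u)) over a uniformly sampled output universe."""
    return float(np.sum(mu * u) / np.sum(mu))

# Toy aggregation of two fired rules (PS clipped at 0.7, PM clipped at 0.3);
# the breakpoints are illustrative, not those of Figures 9-11.
u = np.linspace(-6.0, 6.0, 601)
mu = np.maximum(np.minimum(triangle(u, 0.0, 2.0, 4.0), 0.7),
                np.minimum(triangle(u, 2.0, 4.0, 6.0), 0.3))
print(round(cog_defuzzify(u, mu), 2))   # crisp motor-speed command
```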

Experimental results analysis

Finally, to confirm the effectiveness of the proposed method, we measured the localization angle and the steering angle of the robot at real angles of 10°, 45°, 90°, 120° and 180° in our lab, with distances between the source and the robot of 1 m, 3 m and 5 m. The test at each angle and distance was repeated five times, and the mean and variance were calculated to assess the repeatability of the results. As shown in Figure 12, the experiments were carried out on the home service mobile robot developed by our research group, and Figure 13 illustrates the experimental circular microphone array. The experimental results are shown in Table 2, in which Error1 is the difference between the localization angle and the real sound angle, and Error2 is the difference between the actual steering angle and the localization angle. The values in Table 2 are the mean values of the five repeated tests.

Figure 12. Experimental robot.

Figure 13. Experimental circular microphone array.

From Table 2 we can see that the mean values of Error1 are 1.16°, 3.64° and 6.08° at distances of 1 m, 3 m and 5 m respectively. In other words, the localization error increases with distance, while the maximum error is no more than 7°, which shows that the proposed localization method has high accuracy. Meanwhile, the actual steering error always stays within ±5° and does not increase with distance. We can therefore conclude that the presented localization method performs well and meets the precision requirements of home robot motion control. What is more, Variance1 and Variance2 stay within 0.8 and 0.4 respectively, which verifies the repeatability of the method.

Distance (mm)   Real angle   Localization angle   Error1   Variance1   Steering angle   Error2   Variance2
1000            10°          11.4°                1.4°     0.32        10.4°            -1°      0.24
1000            45°          46.2°                1.2°     0.98        45.6°            -0.6°    0.24
1000            90°          91.6°                1.6°     0.24        90.8°            -0.8°    0.16
1000            120°         120.6°               0.6°     0.64        120°             -0.6°    0
1000            180°         181°                 1°       0.8         180.4°           -0.6°    0.24
3000            10°          13°                  3°       0.8         12.4°            -0.6°    0.24
3000            45°          48°                  3°       0.8         47.4°            -0.6°    0.24
3000            90°          94.6°                4.6°     0.24        94.2°            -0.4°    0.16
3000            120°         123.4°               3.4°     0.64        123°             -0.4°    0.4
3000            180°         184.2°               4.2°     0.56        183.8°           -0.4°    0.24
5000            10°          15.2°                5.2°     0.56        14.6°            -0.6°    0.24
5000            45°          50.4°                5.4°     0.64        50.2°            -0.2°    0.4
5000            90°          96.4°                6.4°     0.24        96°              -0.4°    0.4
5000            120°         126.8°               6.8°     0.64        126.6°           -0.2°    0.24
5000            180°         186.6°               6.6°     0.24        186°             -0.6°    0.4

Table 2. Precision and repeatability test of the proposed method.
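As a quick arithmetic check, the per-distance means of the Error1 column in Table 2 reproduce the 1.16°, 3.64° and 6.08° quoted above:

```python
import numpy as np

# Error1 values (degrees) from Table 2, grouped by source distance.
error1 = {
    "1 m": [1.4, 1.2, 1.6, 0.6, 1.0],
    "3 m": [3.0, 3.0, 4.6, 3.4, 4.2],
    "5 m": [5.2, 5.4, 6.4, 6.8, 6.6],
}
for dist, errs in error1.items():
    print(dist, round(float(np.mean(errs)), 2))   # 1.16, 3.64, 6.08
```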

Conclusions

In view of the problems existing in the autonomous movement of service robots, this paper designs a novel sound source localization method for robot motion control. It improves the autonomy and automation level of the robot by achieving autonomous steering and forward movement when the user talks to it. To achieve accurate steering at all angles, we propose a novel subsection control strategy, which not only decreases the error caused by inertia but also makes the motion more fluent and natural. Experimental results indicate that the mean localization error and the actual steering error are within ±7° and ±1° respectively. In a word, the method satisfies the motion control precision requirements of the home service robot.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (61375084), the Key Program of Shandong Provincial Natural Science Foundation, China (No. ZR2015QZ08), and the Fundamental Research Funds of Shandong University (2014JC034).

References
  1. Nakamura K, Nakadai K, Asano F, Yuji H, Hiroshi T. Intelligent sound source localization for dynamic environments. International Conference on Intelligent Robots and Systems, St. Louis, MO, USA. 2009: 664-669.
  2. Luo R C, Lai C C. Multisensor fusion-based concurrent environment mapping and moving object detection for intelligent service robotics. IEEE Transactions on Industrial Electronics. 2014, 61(8): 4043-4051.
  3. Kim Y, Yoon W C. Generating task-oriented interactions of service robots. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2014, 44(8): 981-994.
  4. Van den Bogaert T, Carette E, Wouters J. Sound source localization using hearing aids with microphones placed behind-the-ear, in-the-canal, and in-the-pinna. International Journal of Audiology. 2011, 50(3): 164-176.
  5. Stachurski J, Netsch L, Cole R. Sound source localization for video surveillance camera. IEEE International Conference on Advanced Video and Signal Based Surveillance. 2013: 93-98.
  6. Pan Z, Xiong Q, Chen K. Landslide rupture plane tracking system based on sound source localization technology. Automation & Instrumentation. 2015(07): 81-84.
  7. Sun L, Li Y. Acoustic emission sound source localization for crack in the pipeline. Chinese Control and Decision Conference, May 2010, Taiyuan, China. 2010: 4298-4301.
  8. Bandi A K, Rizkalla M, Salama P. A novel approach for the detection of gunshot events using sound source localization techniques. IEEE 55th International Midwest Symposium on Circuits and Systems (MWSCAS'12), August 2012, Boise, Idaho, USA. 2012: 494-497.
  9. Yuan X, Song M, Zhou F, Chen Z, Li Y. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System. Computational Intelligence and Neuroscience. 2015.
  10. Wang G, Chen H. An importance sampling method for TDOA-based source localization. IEEE Transactions on Wireless Communications. 2011, 10(5): 1560-1568.
  11. Li X, Liu H. A survey of sound localization for robot audition. CAAI Transactions on Intelligent Systems. 2012, 07(1): 9-20.
  12. Li H, Liu H, Gao H, Peng S. Reliable fuzzy control for active suspension systems with actuator delay and fault. IEEE Transactions on Fuzzy Systems. 2012, 20(2): 342-357.
