Microphone Array Beamforming with Optical MEMS Microphones

This article, written by the optical MEMS microphone experts at sensiBel, expands on the technologies that will undoubtedly contribute to the next generation of hearables and advanced audio systems. As the Norwegian company enters the marketplace with its first-generation products, the authors explain how 80dB SNR optical MEMS microphones will enhance microphone array beamforming and examine the different types of beamforming algorithms.

Multi-microphone solutions have increasingly replaced single microphones to improve acoustic performance and the end-user experience. The overall acoustic performance goal is to achieve high-quality reproduction of the desired signal (voice, music, etc.) without any noise (undesired environmental sounds or system noise).

This is challenging due to the wide variety of environments where audio is captured — from an acoustically challenging bare-walled reverberant conference room to a bustling café to a windy outdoor scene. By arranging multiple microphones in specifically chosen microphone arrays and applying digital processing algorithms, system designers have been able to increase directionality and the system-level signal-to-noise ratio (SNR) of the audio capture over single-microphone implementations.

This has created a superior user experience — being able to focus on a conversation or audio playback without the distraction of added noise. Efforts for further improvement have continued toward a more immersive listening experience even in the most challenging and complicated environments, such as capturing an individual speaking on a busy train, a soft voice at the other end of a large conference room, or even adaptive 3D spatial sound in AR/VR applications. In many cases, this requires the microphone array to perform as well as or better than the users' own ears if they were there in person (Figure 1).

Many challenges lie in the way of this goal of a microphone array with wide frequency range, high directivity, and low self-noise.

Figure 1: Audio capture can be significantly limited by inferior microphones. Higher-SNR MEMS microphones provide a superior user experience.

Optical MEMS Microphones
Until now, the choice of microphone for arrays has been between the capacitive micro-electromechanical systems (MEMS) microphone and the electret condenser microphone (ECM). For mainstream consumer products, capacitive MEMS microphones have by far been the most common choice due to their compact size, manufacturability, device consistency, and on-chip digital output. However, capacitive MEMS microphones have reached the limits of performance, specifically SNR, resulting in the self-noise of the MEMS microphone becoming the limiting noise source of the array.

ECMs (in medium to large capsule form) have offered a higher-SNR alternative; however, they suffer from mediocre device-to-device consistency and typically have an analog output requiring a discrete analog-to-digital converter (ADC). In certain applications, such as research- or instrumentation-grade arrays, system designers have implemented cumbersome workarounds to accommodate ECMs: hand soldering every microphone in the array, carrying out initial calibration and subsequent periodic calibrations to guarantee matching, and using costly ADCs to capture high-quality digital signals for processing.

For higher-volume products these workarounds aren't practical, so ECM-based arrays aren't a scalable solution. The ideal microphone for a beamforming array should have all the required attributes — compact size, manufacturability, device consistency, digital output, and high SNR.

sensiBel, a Norwegian company developing a new generation of MEMS microphones using patented optical technology based on years of research at SINTEF, one of Europe's largest independent research organizations, has created such a MEMS microphone. The company's first optical MEMS microphone delivers 80dBA SNR and 146dB SPL AOP in a MEMS package with a digital output of PDM, I2S, or TDM. In addition to the very high performance, it can scale to high-count microphone systems due to its SMT reflow compatibility and consistent, stable device-to-device sensitivity and phase matching. sensiBel's 24-bit digital output also supports the 132dB dynamic range of the microphone, allowing detection of low-level far-field signals as well as the loudest near-field signals.

The TDM8 format offers a very scalable array solution where up to eight microphones can share the same digital signal line, for easy integration of arrays with a large microphone count.

Achieving Directionality

MEMS microphones are typically omnidirectional, meaning that they pick up sounds equally from all directions. Omnidirectional MEMS microphones can easily support directionality at the system level by using two or more microphones. The spatial separation of the microphones creates different delays between the sound source and each microphone, which can be processed to create a directional signal toward the sound source. This forms the basis of a beamforming array.
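To make the delay-based principle concrete, here is a minimal sketch of the far-field arrival-time difference that a beamformer works with; the 21mm spacing and 45-degree angle are illustrative values, not figures from this section.

```python
import math

# Minimal illustration of the inter-microphone delay that beamforming exploits.
# Spacing and angle are example values chosen for illustration only.
c = 343.0                      # approximate speed of sound in air, m/s
d = 0.021                      # 21 mm microphone spacing
theta = math.radians(45.0)     # source angle relative to the array axis
tau = d * math.cos(theta) / c  # far-field arrival-time difference between the two mics
print(f"Inter-microphone delay: {tau * 1e6:.1f} microseconds")  # roughly 43 microseconds
```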

MEMS microphones can individually be made directional by the addition of a second acoustic port in the package. At first glance, this may seem like a favorable alternative to using multiple omnidirectional MEMS microphones in a beamforming array; however, directional MEMS microphones have remained somewhat of a niche in the industry due to a few enduring advantages of the omnidirectional MEMS microphone array solution:

The omnidirectional microphone array achieves directionality in the post-processing of the signal, rather than purely through the mechanical implementation of a directional MEMS microphone. This provides a more flexible solution, particularly for the industrial design. The microphone array beamforming solution can be adapted using variable gains and delays, while a directional microphone fixes the beamforming pattern based on the relative dimensions of the front and back acoustic ports. Higher-count microphone arrays can even track the source of the signal [1].

Wind noise creates a large pressure gradient across the sensor of a directional MEMS microphone. The microphone array has the ability to detect wind noise and revert to an omnidirectional output, which allows much more headroom to avoid overload. Even in situations without wind noise, the microphone array can temporarily return to omnidirectional operation for context awareness or non-directional application use cases. Very high SNR omnidirectional MEMS microphones enable the greatest configurability, flexibility, and best performance in beamforming arrays (Figure 2).

Figure 2: Microphone array built with sensiBel optical MEMS microphones.

A polar plot is used to illustrate the directionality of a microphone or microphone system. Figure 3 shows the plot of an omnidirectional response (the microphone has the same sensitivity from all directions) on the left, while a directional response is shown on the right. The omnidirectional response indiscriminately picks up sounds from any direction, in this case both the desired signal at the front and noise sources at the rear. The directional response picks up the desired sound at the front but rejects, or at least significantly attenuates, the noise sources at the rear.

Figure 3: Polar plots showing an omnidirectional response (a) and a directional response (b), with the desired signal at 0 degrees (on-axis) and interfering signals at other angles.

The directional response example is a somewhat idealistic directional pattern where the front on-axis (0 degree) response is just wide enough to capture the desired signal and there is consistently high attenuation elsewhere. In simple beamformers — where a small number of microphones and a simple algorithm are deployed — the pattern is less perfect than this and the off-axis attenuation won't be as consistently high, with a more gradual off-axis attenuation increasing to maximum attenuation at an angle where the response is null. A uniformly high level of rejection and greater directionality can be achieved using higher-order beamforming systems, requiring higher levels of signal processing. The performance of such beamforming systems is significantly improved using very high SNR microphones.

Beamforming Algorithms

Directionality is achieved through signal processing of multiple microphones, typically on a digital signal processor (DSP) or system-on-a-chip (SoC). This processing is fundamentally based on the following beamforming algorithms:

  • Delay-and-Sum
  • Differential
  • Minimum-Variance Distortionless Response (MVDR)

The first two algorithms, Delay-and-Sum and Differential, are examples of simple algorithms that have a limited set of configurable parameters. They also have simple computation methods with straightforward implementation [2].

Other, more complex algorithms take into account the environmental conditions to create more optimal beamforming solutions for specific applications and are typically referred to as adaptive beamformers.

The third algorithm listed, MVDR, is an example of an adaptive beamformer that takes significant advantage of an 80dB SNR microphone. Other adaptive beamformers include the linearly constrained minimum variance (LCMV), generalized sidelobe canceller (GSC), and Frost beamformers [2]. We will focus on the three algorithms listed.

Delay-And-Sum — a Simple Algorithm with Limited Directivity

Delay-and-Sum beamforming — sometimes also called phased array processing — is where two or more microphone outputs are assigned individual delays and then summed together to steer the beam toward a desired signal of interest (on-axis response). This is typically implemented in a broadside linear array where the desired signal is perpendicular to the array (0-degree position), as shown in Figure 4.

Figure 4: Delay-and-Sum broadside array direction.

Signals at the front (0 degrees) and rear (180 degrees) of the array are phase aligned and summed constructively, while signals to the side are attenuated since they are not in phase. The addition of a delay steers the on-axis response away from the line perpendicular to the microphones, and the delay can even be increased to the point that it rotates the on-axis response in line with the array to form a Delay-and-Sum endfire array.
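As a rough sketch of this behavior (an idealized free-field model, not sensiBel's implementation), the response of a two-microphone Delay-and-Sum pair can be computed analytically; the 50mm spacing anticipates the conferencing example later in the article, and the optional steering delay corresponds to the beam rotation described above.

```python
import numpy as np

def delay_and_sum_response_db(freq, angles_deg, spacing=0.05, steer_deg=0.0, c=343.0):
    """Magnitude response (dB) of a two-mic Delay-and-Sum broadside pair.

    0 degrees is broadside (perpendicular to the array). A non-zero steer_deg
    applies a compensating delay to one channel, steering the main lobe.
    """
    theta = np.radians(angles_deg)
    tau = spacing * np.sin(theta) / c                        # path-difference delay
    tau_steer = spacing * np.sin(np.radians(steer_deg)) / c  # steering delay
    # Sum of the two unit-amplitude signals, normalised so the steered direction is 0 dB
    y = 0.5 * np.abs(1.0 + np.exp(-2j * np.pi * freq * (tau - tau_steer)))
    return 20 * np.log10(np.maximum(y, 1e-12))

# 2 kHz response of a 50 mm pair at a few arrival angles (0 = on-axis)
print(delay_and_sum_response_db(2000.0, np.array([0, 45, 90, 135, 180])))
```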

The advantage of Delay-and-Sum beamforming is that the on-axis frequency response is flat and does not require equalization; however, the off-axis attenuation is very frequency dependent and can only be optimized at specific frequencies (where the half wavelength corresponds to the microphone spacing).

Above that frequency, spatial aliasing occurs, where lobes are created in addition to the on-axis response and the rejection can become unpredictable, while below that frequency there will be little attenuation of the signal [1].

This may be improved by adding more microphones to the array, but the increased size introduces aliasing at even lower frequencies. The directional response of Delay-and-Sum is also symmetric, so there will be no attenuation at 180 degrees from the on-axis direction.

Differential — More Directive but EQ Required

Differential beamforming is where the difference between the signals of two or more microphones is used to form directionality. The microphones are typically arranged in an endfire formation in the direction of the desired signal to create a spatial difference for each microphone relative to the source (see Figure 5). In a two-microphone differential endfire array, the rear microphone signal is subtracted from the front microphone signal.

Figure 5: Differential endfire array.

This has the effect of rejecting sounds from the sides and behind the array. A cardioid pattern (shown in Figure 6) can be formed by adding a delay to the "rear" microphone signal, corresponding to the acoustic delay between the microphones for their given spacing. This is a classic pickup pattern used to achieve very high off-axis rejection — over 30dB directly at the rear 180-degree position — and a moderate 6dB rejection at the sides, 90 and 270 degrees. The side rejection can be increased to 12dB and 18dB by using second- and third-order implementations, respectively.
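A minimal sketch of this delay-and-subtract cardioid (an idealized free-field model using the 21mm spacing from the article's example, not the authors' implementation) reproduces the roughly 6dB side rejection and the deep rear null.

```python
import numpy as np

def cardioid_response_db(freq, angles_deg, spacing=0.021, c=343.0):
    """First-order differential (cardioid) endfire response for two microphones.

    The rear channel is delayed by the acoustic travel time spacing/c and then
    subtracted from the front channel. 0 degrees is the desired on-axis
    direction; 180 degrees is directly behind the array.
    """
    theta = np.radians(angles_deg)
    omega = 2.0 * np.pi * freq
    # Relative arrival phase at each microphone for a far-field plane wave
    front = np.exp(+1j * omega * (spacing / 2) * np.cos(theta) / c)
    rear = np.exp(-1j * omega * (spacing / 2) * np.cos(theta) / c)
    # Delay-and-subtract: rear channel delayed by spacing/c, then subtracted
    y = front - np.exp(-1j * omega * spacing / c) * rear
    return 20 * np.log10(np.maximum(np.abs(y), 1e-12))

resp = cardioid_response_db(1000.0, np.array([0, 90, 180]))
print(resp - resp[0])   # relative to on-axis: about 0 dB, -6 dB, and a deep rear null
```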

Figure 6: Directional responses for a two-element microphone array at 500Hz and 2kHz. At low frequencies, the differential array retains its directivity.

This high off-axis rejection is a huge advantage of differential beamforming. It is achieved without adding any spatial aliasing below the first null of the signal, giving a predictable frequency response. The frequency response does have a first-order high-pass filter characteristic of 6dB/octave, which needs to be equalized. This introduces a trade-off between signal bandwidth and white noise gain, which is discussed in the differential beamforming section of this article.

By using an 80dB SNR microphone, less noise is introduced in this trade-off, allowing a wide-bandwidth, flat frequency response along with high off-axis rejection and better directivity at lower frequencies.

Consider a small two-element conferencing microphone array with either architecture: the Delay-and-Sum broadside array is implemented with a 50mm microphone spacing, and the differential endfire array is implemented with a 21mm spacing.

From the polar patterns (Figure 6), we can see the differences in directionality, where the differential architecture performs more favorably, particularly at low frequencies. The frequency response (Figure 7) shows the advantage of Delay-and-Sum, giving a naturally flat response. But the high-pass filter nature of differential beamforming can be equalized, particularly when using an 80dB SNR microphone.

Figure 7: Uncompensated frequency response of the microphone arrays, with on-axis sound incidence.

A small two-element conferencing microphone array with Delay-and-Sum or differential beamforming can be easily implemented in many real-world applications. Delay-and-Sum can be implemented using the communication microphones on either side of a laptop webcam to form an effective broadside array pointing at the user, who is predominantly directly in front of the camera, while attenuating environmental sounds to the sides.

A differential endfire array can be implemented inside a USB podcast microphone, where the user speaks into the front side and the high directivity can reject reverberation and other room noise to create a smooth, professional audio capture. It can also easily adapt the beamforming to create a figure-8 pattern for "interview mode" when the delay is removed (Figure 8).

Figure 8: (a) Application example of a Delay-and-Sum broadside array in a laptop and (b) a differential endfire array in a USB podcast microphone.

The Value of High SNR Microphones for Delay-and-Sum Beamforming
Delay-and-Sum processing is more favorable than differential beamforming with respect to the on-axis output signal since it produces a flat response and a net increase in system SNR. These benefits trade off against directivity, which is much lower than with differential processing, and the off-axis rejection levels are more dependent on frequency and the precise angle of arrival. The SNR improvement is attractive since it is easily realized — the natural output of a two-microphone Delay-and-Sum beamformer results in a 3dB increase in system SNR due to the net effect of adding coherent signals, +6dB, minus the incoherent noise increase, +3dB. This +3dB improvement continues for every doubling of the number of microphones used.

This effect often means Delay-and-Sum is employed as a pure microphone SNR increase for the system. An 80dB SNR MEMS microphone already has a greater than 10dB SNR advantage over other capacitive MEMS microphones, meaning that fewer microphones — a factor of 8x fewer — can be used to achieve the same system-level SNR while reducing system complexity and processing. Alternatively, the same number of microphones may be retained, particularly for composite or nested arrays, which maintain on-axis beamwidth across frequency, in which case the absolute SNR of the system will be significantly higher using 80dB SNR MEMS microphones (Figure 9).
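A quick sketch of this scaling (an idealized model assuming a perfectly coherent signal and fully uncorrelated self-noise) reproduces the +3dB-per-doubling rule.

```python
import numpy as np

# Ideal Delay-and-Sum SNR gain: the coherent signal grows by 20*log10(N) while the
# uncorrelated self-noise grows by 10*log10(N), so the net gain is 10*log10(N) dB.
for n_mics in (1, 2, 4, 8):
    gain_db = 10.0 * np.log10(n_mics)
    print(f"{n_mics} microphone(s): +{gain_db:.1f} dB system SNR over a single microphone")
```

In this idealized model, eight 70dB SNR microphones summed this way land at roughly 79dB system SNR, which is the sense in which a single 80dB SNR microphone can replace many lower-SNR devices.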

Figure 9: Delay-and-Sum system-level SNR vs. microphone count for different microphone SNRs.

The Value of High SNR Microphones in Differential Beamforming
Differential beamforming by nature generates a high-pass filter response due to the diminishing difference in sound pressure between the microphones as we approach low frequencies, as shown earlier in Figure 7.

This high-pass response needs to be equalized with a low-pass or low-shelf filter to restore a flat response. In doing so, both the response and the self-noise are gained equally, but unlike the response, the self-noise does not start from an attenuated state, so the net effect of differential processing is an increase in self-noise, commonly referred to as white noise gain.

This is particularly evident at low frequencies where the gain is highest. Additionally, even the frequencies without gain see an increase due to the subtraction of uncorrelated noise sources, which for two microphones amounts to +3dB. Avoiding added noise after EQ means either reducing the low-frequency attenuation or reducing the starting self-noise of the system (microphone self-noise), as shown in Figure 10.
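The size of this penalty can be estimated with a simple free-field sketch (a rough model assuming uncorrelated self-noise and the 21mm spacing used in the article's example; actual figures depend on the EQ design).

```python
import numpy as np

# Rough white-noise-gain estimate for a first-order differential (cardioid) pair.
c, d = 343.0, 0.021   # speed of sound (m/s) and 21 mm microphone spacing
for f in (100.0, 1000.0):
    # EQ gain needed to bring the on-axis differential response back to 0 dB
    eq_gain_db = -20.0 * np.log10(2.0 * np.sin(2.0 * np.pi * f * d / c))
    # Subtracting two uncorrelated self-noise sources adds a further +3 dB
    noise_rise_db = 3.0 + eq_gain_db
    print(f"{f:>6.0f} Hz: self-noise raised by roughly {noise_rise_db:.1f} dB after EQ")
```

In this sketch the penalty is only a few dB at 1kHz but more than 20dB at 100Hz, which is why the low-frequency behavior dominates the trade-off.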

Figure 10: Input-referred noise spectrum for 70dBA and 80dBA SNR microphones, in omnidirectional mode and after first-order differential beamforming with EQ applied.

Differential Beamforming Low-Frequency Attenuation vs. Directionality and Bandwidth
The amount of low-frequency attenuation is determined by two factors — microphone spacing and the order of beamforming. Increased microphone spacing will decrease low-frequency attenuation of the signal, since adjacent microphones will see a greater difference in sound pressure. However, this comes at a cost — the frequency where on-axis signals cancel each other out (the null frequency) will be lower. This occurs where the distance between microphones equals half the wavelength of that frequency; for 21mm spacing this would be about 8kHz. Since it is very difficult to equalize the series of nulls and maxima that occur above that frequency, the first null typically defines the usable bandwidth of the signal.

The maxima above the first null also suffer from spatial aliasing, so the cardioid pattern is lost. Increasing the distance to 42mm, for example, increases the signal level at 100Hz by 10dB but decreases the usable bandwidth to 4kHz, as shown in Figure 11.
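The half-wavelength rule makes these bandwidth figures easy to check (assuming a nominal 343m/s speed of sound).

```python
# First on-axis null of a differential endfire pair: f_null = c / (2 * d),
# i.e. the frequency whose half wavelength equals the microphone spacing.
c = 343.0  # nominal speed of sound, m/s
for d_mm in (7, 21, 42):
    f_null_khz = c / (2.0 * d_mm * 1e-3) / 1000.0
    print(f"{d_mm:>2d} mm spacing -> first null near {f_null_khz:.1f} kHz")
```

The printed values (roughly 24.5kHz, 8.2kHz, and 4.1kHz) match the 7mm, 21mm, and 42mm bandwidths discussed in this section.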

Figure 11: Uncompensated frequency response for each microphone spacing of a first-order differential endfire beamformer.

A compact microphone array supporting 24kHz bandwidth can be implemented using an 80dB SNR microphone. Table 1 shows the required gain for each spacing for first-order differential beamformers. This gain correlates with an SNR requirement for each spacing. Only an 80dB SNR microphone can meet the requirements for all microphone spacings, in particular meeting the requirement for 7mm spacing, which yields full audio bandwidth. Increasing the order of beamforming is desirable to achieve higher directionality. By creating a second- or third-order array, the cardioid pattern becomes tighter, increasing the rejection at 90 and 270 degrees from 6dB to 12dB and 18dB, respectively (Figure 12).

Table 1: Minimum required microphone SNR for each microphone spacing and bandwidth.
Figure 12: Directional response of a first-, second-, and third-order differential array with 21mm microphone spacing at 1kHz.

This also comes at a cost, since the order of the filtering also increases, with the slope of the high-pass response increasing from 6dB/octave to 12dB/octave (second order) and 18dB/octave (third order), as shown in Figure 13. This exacerbates the white noise gain seen in a first-order system, and the sensitivity to microphone self-noise is much greater. Superior system-level noise performance in a higher-order array is realized using an 80dB SNR MEMS microphone.

Figure 13: Uncompensated frequency response for each order of differential endfire beamformer.

Directivity Index — Evaluating the Overall Array Performance
A polar plot is helpful for analyzing the array response at a single frequency. However, to analyze how the array performs over the full frequency range of interest, it is better to reduce the directivity to a single number called the Directivity Index (DI). This number can be calculated for each frequency to understand how directional the array is at various frequencies.
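For an axially symmetric pattern such as the endfire examples here, the DI can be approximated numerically. The sketch below is an illustrative formulation (not the authors' tooling) and recovers the familiar ~4.8dB figure for an ideal cardioid.

```python
import numpy as np

def directivity_index_db(response_fn, freq, n=3601):
    """Approximate DI for an axially symmetric array response.

    response_fn(freq, theta_deg) returns the linear magnitude response, with
    theta measured from the array axis. Rotational symmetry is assumed, so the
    average over the sphere reduces to a sin(theta)-weighted average over theta.
    """
    theta = np.linspace(0.0, np.pi, n)
    power = response_fn(freq, np.degrees(theta)) ** 2
    # Spherical average: (1/2) * integral of power(theta) * sin(theta) d(theta)
    mean_power = 0.5 * np.sum(power * np.sin(theta)) * (theta[1] - theta[0])
    return 10.0 * np.log10(power[0] / mean_power)

# Sanity check with an ideal cardioid, (1 + cos(theta)) / 2: DI is about 4.8 dB
di = directivity_index_db(lambda f, a: (1.0 + np.cos(np.radians(a))) / 2.0, freq=1000.0)
print(f"Cardioid DI: {di:.1f} dB")
```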

Figure 14 shows the DI for the two two-element microphone arrays under consideration. The Delay-and-Sum array has no directivity at low frequencies, as can also be seen in the 500Hz polar plot in Figure 6. The directivity then increases at high frequencies, where the wavelength of the sound is comparable to or smaller than the microphone spacing. Maximum directivity occurs at 4kHz, where the spacing is slightly more than half a wavelength.

Figure 14: Directivity vs. frequency for first-order Delay-and-Sum broadside and differential endfire arrays.

The differential array performs much better at low frequencies, with a DI of 4.8dB. In theory, the directivity is maintained all the way down to zero Hertz, but in practice the EQ cannot be designed to have infinite amplification at very low frequencies. At high frequencies, spatial aliasing occurs and the array loses its directivity. The DI becomes negative at those frequencies, indicating the array is performing worse than even an omnidirectional microphone, since the off-axis response is higher than the on-axis response.

Figure 15 shows the effect of reducing the microphone spacing in the differential array. It mirrors the trend shown in Figure 11's uncompensated frequency response, where the usable bandwidth increases when the microphone spacing is decreased. Above the first null, the directivity goes negative several times inside the audio band for the 21mm and 42mm examples. However, with a 7mm spacing one can get high directivity all the way up to 10kHz, and the DI stays positive up to 18kHz. Due to the required EQ, this 7mm spacing is only achievable with an 80dB SNR microphone.

Figure 15: Directivity vs. frequency for a first-order differential endfire array at different microphone spacings.

MVDR — Example of an Adaptive and Data-Driven Beamforming Algorithm
While the Delay-and-Sum and differential beamformers are easy to implement with simple signal processing building blocks, they do not provide an optimum solution in many cases. The application might require different beamformers depending on the acoustic environment, for example to steer the beam in different directions and suppress interference from certain other directions. Adaptive, data-driven beamformers aim to solve these problems by optimizing the beamformer using the signals coming from the microphones.

MVDR is such an algorithm, where individual amplification and delay are adapted for each element in the array to preserve the sensitivity in the direction of the desired signal while maximizing the attenuation of other interfering signals. In fact, the algorithm attempts to minimize the total array output signal, which includes acoustic interference, noise, and microphone self-noise.

The individual amplification and delay are typically applied per frequency, which results in a digital filter for each microphone signal, H in Figure 16. This can be implemented as a finite impulse response (FIR) filter or with frequency-domain processing.
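For reference, the textbook narrowband MVDR weight computation per frequency bin looks like the sketch below. This is a generic formulation with illustrative function and variable names; it is not sensiBel's implementation or the exact filters H shown in Figure 16.

```python
import numpy as np

def mvdr_weights(R, d):
    """Classic narrowband MVDR weights, w = R^-1 d / (d^H R^-1 d).

    R : (M, M) spatial covariance matrix of noise plus interference,
        estimated per frequency bin from the microphone signals
    d : (M,) steering vector toward the desired source at this frequency
    The constraint w^H d = 1 keeps the look direction distortionless while
    the total output power (interference, noise, self-noise) is minimised.
    """
    r_inv_d = np.linalg.solve(R, d)
    return r_inv_d / (np.conj(d) @ r_inv_d)

def far_field_steering_vector(freq, mic_positions, unit_direction, c=343.0):
    """Illustrative far-field steering vector for microphones at mic_positions
    (M x 3, in metres) toward a unit propagation direction vector."""
    delays = mic_positions @ unit_direction / c
    return np.exp(-2j * np.pi * freq * delays)

# The beamformer output for one bin is then y = w.conj() @ x, where x holds
# the M microphone spectra for that frequency bin.
```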

Figure 16's example of a five-microphone circular array could be used in a smart speaker. The advantage of such a geometry is that the array will have equal performance over 360 degrees, compared to an endfire or broadside array that is targeted toward a particular direction.

Figure 16: (a) Example of a multi-microphone smart speaker and (b) system implementation of a five-microphone circular array with individual filters (represented by transfer functions Hn) applied to each microphone.

The Value of High SNR Microphones for MVDR Beamforming
In MVDR beamforming, the directions of both the desired signal and the interfering noise need to be known up front or established during operation using signal processing and statistics. The advantage of this method is that high-attenuation nulls can be steered in the direction(s) of the interference(s), and given the correct constraints, the algorithm will produce an optimum directional response.

The output of the processing has the same traits as Delay-and-Sum — a flat frequency response in the direction of interest and a net increase in SNR — but with higher directivity. The direction of the desired signal is typically identified with a separate Direction-of-Arrival algorithm, and then the spatial properties of the noise sources are identified. This information is then used to optimize the filters to place nulls in the directions of the noise sources, as well as to suppress diffuse reverberant noise in the room (Figure 17).

Figure 17: Example of a directional response for an MVDR beamformer set up to have a 0dB response in the 0° direction and to attenuate signals from 130° and 270°.

Adaptive, data-driven beamformers are susceptible to errors in the Direction-of-Arrival estimation, as well as inadvertent subtraction of desired signal reflections. Thus, practical design of the algorithms must account for a certain amount of microphone self-noise.

An 80dB SNR microphone helps — the lower noise level means the actual environmental noise levels and point sources can be detected more accurately. In addition, the MVDR algorithm can work more aggressively to achieve higher directivity when used with an 80dB SNR microphone. An example of this has been simulated in Figure 18, comparing an 80dB SNR MEMS microphone to a 70dB SNR MEMS microphone.

Figure 18: MVDR directional response at 500Hz with a fixed environmental noise level but different sensor noise levels. The array geometry is a two-element endfire with 21mm spacing.

In Figure 19, the DI is shown versus frequency for the two microphones with different SNRs. In the frequency range of 200Hz to 2kHz (the center of the frequency range of the human voice), there is up to 3.5dB better directivity due to the improved microphone SNR. This means that more reverberation and interfering signals will be removed, especially to the rear of the array.

Figure 19: Directivity Index with a fixed ratio between the environmental noise and the sensor self-noise.

The benefit of MVDR using an 80dB SNR microphone is increased directivity without introducing perceivable microphone self-noise. A high SNR MVDR beamformer can be applied to a wide variety of dynamic applications and achieve significant rejection of noise sources. It provides a higher-directionality alternative to Delay-and-Sum and a more adaptive alternative to differential arrays.

Beyond Beamforming — Further System Processing

Beamforming is one part of a fully optimized audio capture system; it is frequently complemented by acoustic echo cancellation (AEC), noise suppression, and adaptive interference cancellation [3]. Beamforming is one of the most important parts — with a high-performance beamformer, the subsequent processing blocks don't have to work as hard. For example, with AEC, a highly directive beamformer before the AEC block can reduce the amount of signal it has to cancel, allowing it to do a better job and avoid aggressive processing, which can result in half-duplex communication or loss of double-talk in a conferencing system.

For noise suppression, high-performance beamforming reduces the amount of interfering noise or system self-noise in the incoming signal, avoiding the use of overly aggressive processing such as noise gating, which can make voice communications sound very unnatural [4].

Increasingly, Artificial Intelligence (AI) is being used in audio processing; in particular, Machine Learning (ML)-optimized algorithms for beamforming and noise suppression can increase performance. An 80dB SNR microphone provides better-quality data for the ML optimization, since there will be more margin between the self-noise and the environmental noise.

Conclusion

Using an 80dB SNR MEMS microphone in three common beamforming algorithms, system performance can be significantly improved with higher directivity, higher bandwidth, and higher SNR. Delay-and-Sum beamformers achieve higher system SNR with fewer microphones, avoiding the processing complications and artifacts of using a higher number of microphones in the system.

Differential beamforming can use tighter spacing between microphones, which increases the usable bandwidth of the signal with less white noise gain, meaning higher system SNR. MVDR and other adaptive algorithms can better optimize the beamforming parameters and create more directionality over a wider frequency range (Table 2).

Table 2: Summary of each beamforming algorithm and the benefit of an 80dB SNR microphone.

System processing features such as AEC, noise suppression, and adaptive interference cancellation benefit from a beamformed signal since it provides a better margin between the desired signal and the interference. This allows a better audio signal without excessively aggressive processing, which can lead to an unnatural-sounding capture.

Combining superior microphone array beamforming with these advanced system processing features allows clearer voice calls, superior audio recordings, more productive conference calls, and more immersive 3D AR/VR audio experiences. sensiBel's 80dB SNR SBM100 series brings this level of performance to MEMS microphones and to beamforming arrays for the first time. For more information, visit sensibel.com. aX

References

[1] M. Suvanto, The MEMS Microphone Book, Kangasala, Finland: Mosomic Oy, 2021.

[2] “Beamforming Overview,” MathWorks, MATLAB. (n.d.), www.mathworks.com/help/phased/ug/beamforming-concepts.html

[3] “Qualcomm QCS400 Smart Speaker/Sound Bar,” DSP Concepts, https://w.dspconcepts.com/reference-designs/qualcomm-qcs400-smart-speaker-soundbar

[4] “Acoustic Echo Cancellation,” Texas Instruments Video Library. Texas Instruments Precision Labs Audio Series, 2022, www.ti.com/video/6308400085112

About the Authors

Michael Tuttle is the Applications Engineering Lead at sensiBel. He has 20 years of industry experience, 15 of them specializing in MEMS microphones, with roles in quality, product development, and applications. He has worked for many of the industry's technology leaders, including Analog Devices, InvenSense, Vesper, TDK, and now sensiBel. He has a master's degree in Microelectronic Engineering from University College Cork, Ireland, and has co-authored patents on MEMS microphone technology.

Jakob Vennerød is the co-founder and Head of Product Development at the Norwegian MEMS startup sensiBel. Previously he worked on several research projects in audio and acoustics at SINTEF. He has extensive experience in acoustics, electronics, and signal processing, particularly in audio technology. He has a master's degree in Electronic Engineering and Acoustics from the Norwegian University of Science and Technology, and has co-authored several patents related to microphone technology.

This article was originally published in audioXpress, April 2024


