My primary passion for the last six years, AI/ML, and my primary passion for the first two decades of my career, digital signal processing (DSP), have finally found a point of intersection in the form of Fourier Analysis Networks (FAN). I have discussed in the past (I wrote a post on the Kolmogorov-Arnold Network, or KAN, about six months ago) that as input functions grow in complexity, the "universal approximation" foundation of multi-layer neural networks starts hitting its limits. The result is too many hidden layers and somewhat unwieldy models. The Kolmogorov-Arnold Network, based on the Kolmogorov representation theorem, takes a different approach: it can represent any continuous multivariate function as a summation of continuous univariate functions. This was quite a breakthrough, and it will continue to serve the field well. One aspect that has so far been neglected, and which is actually one of the primary objectives in DSP, is to discover, and utilize, the periodicity of data. A key benefit is that if there is periodicity, a time-domain input can be represented more compactly in the frequency domain. To do this, we use Fourier analysis, which decomposes a signal into a sum of sinusoidal components, which are fundamental to understanding the periodicity and frequency content of the input. A Fourier Analysis Network (FAN) is a type of neural network that uses the principles of Fourier analysis to model, analyze, and process signals or data. FANs incorporate sinusoidal functions into their architecture to capture periodic or frequency-domain features of data. Such networks can encode data in the frequency domain, which is particularly useful in scenarios where periodicity is present (such as audio signals and image textures). There are many types of FANs! Here are a few examples. 
The Fourier Neural Operator (FNO) uses the Fourier transform to learn mappings between function spaces, and it is very useful in solving partial differential equations. Fourier Feature Networks use Fourier feature embeddings to transform input data into a high-dimensional space using sinusoidal functions, with Neural Radiance Fields (NeRF) as a notable application. Finally, Spectral Neural Networks operate entirely in the frequency domain instead of the time or spatial domain, and can be used for image compression, denoising, and other applications. We like to learn new things in our area of work all the time. But if a "ghost from the past" becomes useful in a new and different way, somehow that becomes even more interesting!
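As a hedged illustration of the Fourier feature embedding mentioned above, here is a minimal NumPy sketch. The function name `fourier_features` and the Gaussian frequency matrix `B` are illustrative choices, not code from any specific paper: inputs are projected onto fixed random frequencies and passed through sine and cosine, producing the high-dimensional sinusoidal embedding that downstream layers learn from.

```python
import numpy as np

def fourier_features(x, B):
    """Map inputs x of shape (N, d) to sinusoidal features of shape
    (N, 2m) using a fixed random projection B of shape (d, m), in the
    spirit of Fourier Feature Networks."""
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Embed 1-D inputs with 8 random frequencies; the frequency scale
# (here 10.0) controls how fine a detail the network can fit.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
B = rng.normal(scale=10.0, size=(1, 8))
phi = fourier_features(x, B)
print(phi.shape)  # (5, 16)
```

The scale of `B` acts like a bandwidth knob: small scales bias the downstream model toward smooth functions, large scales toward rapidly oscillating ones.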
Innovations In Signal Processing Techniques
Explore top LinkedIn content from expert professionals.
Summary
Innovations in signal processing techniques are transforming how we analyze, interpret, and use complex signals in fields like communications, energy, and safety. Signal processing is the science of converting, analyzing, and extracting useful information from raw data or signals, such as sound, electrical currents, or radiation, by using mathematical and computational methods.
- Embrace AI integration: Explore how machine learning and neural networks can help identify patterns, recover hidden information, and improve decision-making from intricate signals.
- Explore hardware advances: Consider using hybrid FPGA devices that combine programmable logic with AI engines to achieve faster, more flexible signal analysis in demanding environments.
- Utilize advanced algorithms: Implement techniques like Fourier analysis, variational mode decomposition, and deep learning to make signal processing more accurate, especially for detecting anomalies or periodic patterns.
-
Real-Time Peak Detection System on FPGA | DRDO Internship

As part of my DRDO internship, I designed and implemented an adaptive peak detection algorithm for real-time signal analysis on FPGA. The goal was to detect transient peaks in noisy signals with minimal latency and high reliability.

🧠 Algorithm Overview:
- The system maintains a sliding window of recent signal samples.
- It continuously calculates the mean and standard deviation over this window to adapt to baseline shifts in the signal.
- A new sample is compared against a dynamic threshold, defined as a multiple of the standard deviation above the mean.
- When the signal exceeds this threshold, it is marked as part of a peak region.
- A finite state machine (FSM) tracks entry into and exit from peak regions, using a hysteresis margin to ensure stable detection and avoid false triggers.
- Upon exit from a peak region, the system registers a valid peak along with its location, amplitude, and width.

🛠️ The design is optimized for FPGA implementation with fixed-point arithmetic, ensuring resource efficiency and real-time operation. It is suitable for applications like:
- Anomaly detection in sensor signals
- Vibration/event monitoring
- Embedded signal analytics

This was a great opportunity to apply statistical signal processing in hardware and optimize it for defense-grade embedded systems. #FPGA #SignalProcessing #Verilog #PeakDetection #RealTimeSystems #AdaptiveThreshold #HardwareDesign #DRDO #DigitalSignalProcessing #VLSI
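The sliding-window algorithm described in the post can be sketched in software before committing it to fixed-point hardware. Below is a minimal Python model of the same idea; the names, thresholds, and the choice to freeze baseline statistics inside a peak are illustrative assumptions, and a real FPGA design would use fixed-point arithmetic and a hardware FSM.

```python
import numpy as np
from collections import deque

def detect_peaks(signal, window=32, k_enter=3.0, k_exit=1.5):
    """Adaptive-threshold peak detection with a sliding window and an
    FSM-style hysteresis: enter a peak region above mean + k_enter*std,
    leave it below mean + k_exit*std, and register location, amplitude,
    and width on exit. Baseline statistics are frozen inside a peak."""
    buf = deque(maxlen=window)
    peaks, in_peak = [], False
    start = best = best_idx = 0
    for i, x in enumerate(signal):
        if len(buf) == window:
            mu = np.mean(buf)
            sigma = np.std(buf) + 1e-12
            if not in_peak and x > mu + k_enter * sigma:
                in_peak, start, best, best_idx = True, i, x, i
            elif in_peak:
                if x > best:
                    best, best_idx = x, i
                if x < mu + k_exit * sigma:   # hysteresis exit
                    peaks.append({"index": best_idx, "amplitude": best,
                                  "width": i - start})
                    in_peak = False
        if not in_peak:  # only clean baseline samples update the stats
            buf.append(x)
    return peaks

# Synthetic check: low-level noise with one short transient at n=100
rng = np.random.default_rng(1)
sig = rng.normal(0.0, 0.1, 200)
sig[100:104] += 5.0
print(detect_peaks(sig))
```

Using two thresholds (`k_enter` > `k_exit`) is what gives the FSM its hysteresis: a sample hovering near the entry threshold cannot repeatedly toggle the detector.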
-
The growing integration of renewable energy into microgrids has raised concerns about islanding, an unplanned state in which distributed generation (DG) continues to power a local grid despite losing its connection to the main utility. This poses significant safety risks to personnel and operational hazards to the grid, as unsynchronized reconnection can cause substantial equipment damage due to large inrush currents. Developing highly accurate, fast, and cost-effective Islanding Detection Systems (IDS) for microgrids is therefore crucial. An IDS is critical for solar PV installations to:
• Ensure safety by preventing energized islands during maintenance.
• Protect equipment from damage due to unsynchronized operation.
• Maintain grid stability and comply with engineering standards such as IEEE 1547, which mandates rapid disconnection (e.g., within 2 seconds) if an island forms.
IDS methods are categorized as local, remote, and signal-processing-based, with local methods further classified as passive or active. Local active methods (e.g., AFD) offer fast and accurate detection but can degrade power quality, while local passive methods (e.g., O/U F&V) avoid this but have a large Non-Detection Zone (NDZ). Remote methods (e.g., PLC) provide fast detection and a small NDZ, but are expensive and complex. Signal processing methods, such as the Fourier/Wavelet Transform (FT/WT) and Empirical Mode Decomposition (EMD), aim to reduce the NDZ, but can suffer from aliasing, and their prediction accuracy in supervised learning settings may degrade. To address the drawbacks of active and passive methods, the authors of [1] propose a hybrid intelligent IDS called 'AVMD-TEO-MPE-1D-CNN'. It is based on a parameter-optimized multiscale variational mode decomposition (VMD) combined with a deep learning approach. 
First, the proposed Adaptive-VMD (AVMD) strategy improves the selection of the optimal mode number and penalty term in VMD by leveraging the relative MPE (multi-scale permutation entropy) between the original signal and the IMFs (intrinsic mode functions). Subsequently, the TEO (Teager Energy Operator) is used to extract further sequential features by tracking the instantaneous energy of the IMFs. Finally, the AVMD-TEO-MPE-based features are used to train a 1D-CNN (one-dimensional convolutional neural network) as a deep learning binary classifier to distinguish between islanding and non-islanding states. The proposed 'AVMD-TEO-MPE-1D-CNN' method demonstrates 100% accuracy in simulation results for distinguishing islanding from non-islanding events across various conditions, with a maximum detection time of 46.402 ms. It also exhibits noise resistance and outperforms existing methods in comparative analyses. The link to paper [1] is shared in the comments. The authors developed their simulation using #Matlab, #Simulink, and #Simscape. However, VMD, MPE, and 1D-CNN implementations are also available in Python through various GitHub repositories.
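Of the building blocks mentioned, the Teager Energy Operator is simple enough to sketch directly. Here is a minimal NumPy version (the edge handling is an illustrative choice, not taken from [1]); applied to each IMF, it tracks instantaneous energy as the post describes:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    For a pure sinusoid A*sin(w*n + p), psi equals the constant
    A^2 * sin(w)^2, so it captures amplitude and frequency together
    as a single instantaneous-energy measure."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]  # simple edge handling (a choice)
    return psi

# Verify the constant-energy property on a sinusoid with A=2, w=0.3
n = np.arange(200)
x = 2.0 * np.sin(0.3 * n)
print(np.allclose(teager_energy(x)[1:-1], 2.0**2 * np.sin(0.3)**2))  # True
```

Because the operator only needs three consecutive samples, it is cheap to compute per IMF and reacts immediately to transient events such as an islanding disturbance.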
-
Our work, "Programmable Circuits for Analog Matrix Computations," is available in Nature Communications. Matrix operations are at the core of signal processing in radiofrequency and microwave networks. While analog matrix computations can dramatically speed up signal processing in multiport networks, they can also reduce the size, weight, and power of radiofrequency and microwave devices by partially eliminating the need for power-hungry electronics. These computing devices exploit fundamental properties of electromagnetic waves, enabling parallel signal processing at the speed of light. Here, we propose and demonstrate a microwave-integrated circuit capable of implementing universal unitary matrix transformations. The proposed device operates by alternating non-reconfigurable and reconfigurable layers of basic RF components, comprising cascaded power dividers and programmable phase elements, respectively. Controllable multipath interference, achieved through the conjunctive use of linear wave mixing and active phase control, enables this device to create complex transformations. We experimentally demonstrate this device concept using a four-port integrated circuit operating across the frequency range of 1.5–3.0 GHz and at power levels of hundreds of microwatts. The proposed device can pave the way for universal analog radiofrequency and microwave processors and preprocessors with programmable functionalities for multipurpose applications in advanced communications and radar systems. Congratulations to the great team: Rasool Keshavarz, Kevin Zelaya, and Negin Shariati. Link: https://lnkd.in/e8Wm5kQV
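As a toy numerical illustration of the layered idea described above (not the paper's actual circuit model), one can alternate a fixed mixing layer with programmable diagonal phase layers and check that every composition is unitary; the choice of a scaled DFT matrix as the fixed layer is an assumption made here for convenience:

```python
import numpy as np

N = 4                                    # four-port device, as in the paper
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # fixed (non-reconfigurable) mixing layer

def programmable_unitary(phases):
    """Toy model: alternate the fixed mixing layer F with programmable
    diagonal phase layers. phases has shape (n_layers, N); each row is
    one layer of phase-shifter settings."""
    U = np.eye(N, dtype=complex)
    for layer in phases:
        U = F @ np.diag(np.exp(1j * layer)) @ U
    return U

rng = np.random.default_rng(42)
U = programmable_unitary(rng.uniform(0.0, 2.0 * np.pi, size=(5, N)))
print(np.allclose(U.conj().T @ U, np.eye(N)))  # True: unitary by construction
```

Since each factor is unitary, the product always is; the design question is how many phase layers are needed so the settings can reach any target unitary.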
-
Data centers constitute the foundational infrastructure underpinning the rapid expansion of Artificial Intelligence (AI) and modern telecommunications. AI applications depend on the immense computing power and large-scale storage provided by data centers to train complex models and perform real-time inference. Concurrently, in telecommunications, data centers are central to enabling 5G technologies, including Network Functions Virtualization (NFV) and the Internet of Things (IoT), all of which demand flexible and scalable computational resources. The evolution towards edge computing, which places data center resources closer to end-users, is particularly critical for latency-sensitive applications such as autonomous systems and smart manufacturing. The effective operation of these distributed data center architectures is contingent upon high-speed, reliable, and energy-efficient communication links.

Optical fiber communication, and specifically coherent transmission, is the key technology capable of delivering the necessary bandwidth and low latency. However, achieving the high data rates required for these applications is challenged by signal impairments within the optical transmitter chain. These impairments include both linear and nonlinear distortions introduced by critical components, namely the Digital-to-Analog Converter (DAC), the driver amplifier, and the Mach-Zehnder Modulator (MZM). These distortions force the system to operate inefficiently, increasing energy consumption and limiting the use of high-order modulation formats, thereby constraining overall system capacity. Digital Predistortion (DPD) is an established technique for compensating such distortions by applying an inverse distortion to the input signal. Implementing DPD in optical systems, however, presents unique challenges due to the extremely high signal bandwidths, which necessitate high-speed data converters and create significant memory effects in the components. 
The computational complexity of traditional DPD methods can therefore become prohibitive, potentially increasing power consumption and undermining the energy efficiency goals. This article https://lnkd.in/d7Nhgzyq presents a frequency-domain DPD approach specifically designed for coherent optical transmitters to address these challenges. Frequency-domain processing offers a solution by enabling the efficient compensation of distortions with strong memory effects. By leveraging fast convolution algorithms based on the Fast Fourier Transform (FFT), the proposed method significantly reduces computational complexity compared to time-domain approaches. Furthermore, it mitigates the need for excessive oversampling, contributing to improved overall energy efficiency of the transmitter.
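The fast-convolution idea behind the frequency-domain approach can be sketched with a standard overlap-save filter. The NumPy example below illustrates the FFT-based complexity reduction in general, not the article's specific DPD structure; the block size and toy kernel are assumptions made for the sketch.

```python
import numpy as np

def overlap_save(x, h, nfft=256):
    """Frequency-domain FIR filtering via overlap-save fast convolution.
    Produces the same result as np.convolve(x, h)[:len(x)], but each
    block costs O(nfft log nfft) instead of O(nfft * len(h))."""
    M = len(h)
    step = nfft - M + 1                 # new output samples per block
    H = np.fft.fft(h, nfft)             # filter response, computed once
    x_pad = np.concatenate([np.zeros(M - 1, dtype=complex), x])
    y = np.empty(len(x), dtype=complex)
    pos = 0
    while pos < len(x):
        block = x_pad[pos:pos + nfft]
        if len(block) < nfft:           # zero-pad the final short block
            block = np.pad(block, (0, nfft - len(block)))
        # circular convolution; the first M-1 outputs are aliased, drop them
        yblk = np.fft.ifft(np.fft.fft(block) * H)[M - 1:]
        n = min(step, len(x) - pos)
        y[pos:pos + n] = yblk[:n]
        pos += n
    return y

# Sanity check against direct time-domain convolution
rng = np.random.default_rng(0)
x = rng.normal(size=1000) + 0j          # stand-in for a baseband signal
h = np.array([0.5, 0.3, -0.2, 0.1], dtype=complex)  # toy memory kernel
ref = np.convolve(x, h)[:len(x)]
print(np.allclose(overlap_save(x, h), ref))  # True
```

The longer the memory of the component being compensated, the larger the advantage of the FFT route: kernel length enters only through the (one-time) `H` computation and the per-block overlap.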
-
Cohere Technologies is a pioneer in spectrum management, but most folks are unaware of its use of prediction and #ai in its software. As the telco industry vets solid use cases for applying AI, Cohere is implementing it today. The integration of AI with USM marks a major leap forward in wireless channel modeling. By harnessing the vast data generated by USM—including uplink and downlink channel measurements, multipath components, delay spreads, and interference patterns—AI-powered models can more accurately capture the complexities of real-world wireless environments. Unlike traditional statistical methods, these models dynamically incorporate temporal and spatial dependencies, environmental factors, and real-time network conditions, enabling more precise tuning of RAN parameters such as modulation schemes, coding rates, and power allocation. Cohere is redefining channel estimation by shifting from traditional statistical methods to an approach that models channels rather than frequencies. Cohere's method calculates radio channel requirements based on user device range and velocity, as well as signal propagation from the cell site to the device. Instead of relying on time and frequency, it leverages distance (measured as signal delay) and speed (measured as Doppler shift) to generate a channel map that remains valid for up to 50 milliseconds. This significantly reduces the processing load on base stations, which would otherwise need to frequently re-estimate channel conditions. As a result, channels remain usable for longer, effectively mitigating the effects of channel aging. By employing the delay-Doppler model, Cohere maintains a real-time, comprehensive view of the wireless channel, optimizing network performance and enhancing user experience. 
This approach maps all energy, interference, and reflectors, creating a detailed representation of both the physical and wireless environments. With a more precise understanding of signal propagation in a given setting, beamforming can be optimized for individual user equipment (UEs), and spectrum utilization can be maximized. Unlike conventional methods that require separate time or frequency slots for each user, Cohere's approach enables multiple users to share the same time and frequency slots, improving spectral efficiency and overall network capacity. Check out the Appledore Research report on Cohere and Robert Curran's analysis of the benefits of the technology. https://lnkd.in/esAiHn8C #5G #spectrum #networkoptimization #telco Ronny Haraldsvik Raymond Dolan Art King
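A hedged sketch of the delay-Doppler idea: in OTFS-style processing, a time-frequency channel grid maps into the delay-Doppler domain via a symplectic Fourier transform, where a single scatterer with one delay and one Doppler shift concentrates into a single bin. The toy example below uses sign and normalization conventions chosen for illustration, not Cohere's implementation.

```python
import numpy as np

def tf_to_delay_doppler(H_tf):
    """Toy mapping of a time-frequency channel grid H_tf[n, m]
    (n = time slot, m = subcarrier) into the delay-Doppler domain:
    FFT along time resolves Doppler, IFFT along frequency resolves
    delay (an OTFS-style symplectic transform)."""
    return np.fft.ifft(np.fft.fft(H_tf, axis=0), axis=1)

# A single scatterer: one delay tap (phase ramp across frequency)
# and one Doppler shift (phase ramp across time).
N, M = 16, 32
n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
doppler_bin, delay_bin = 3, 5
H_tf = np.exp(2j * np.pi * (doppler_bin * n / N - delay_bin * m / M))
H_dd = tf_to_delay_doppler(H_tf)
peak = tuple(int(v) for v in
             np.unravel_index(np.abs(H_dd).argmax(), H_dd.shape))
print(peak)  # (3, 5): the scatterer collapses into one delay-Doppler bin
```

The appeal is exactly what the post describes: physical reflectors change their delay and Doppler slowly, so the compact delay-Doppler map stays valid far longer than a raw time-frequency channel estimate.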
-
Hybrid FPGA devices are unlocking unprecedented capabilities for radar and signal processing applications. These advancements offer a compelling path to higher performance, lower power consumption, and greater flexibility in mission-critical systems.

🔍 Why Hybrid FPGAs?
Traditional FPGAs have long been used in radar and signal processing, but hybrid architectures—which combine FPGA fabric with integrated hard cores, CPUs, and AI accelerators—are pushing performance boundaries even further.

💡 Key Advantages of Hybrid FPGAs
1️⃣ Higher Computational Efficiency: Integrated DSP blocks and AI engines accelerate complex signal processing, reducing latency. Parallelism enables real-time radar data analysis for enhanced target detection and tracking.
2️⃣ Lower Power, Higher Integration: On-chip hard cores and CPU cores offload tasks, reducing power consumption and board space. Embedded RF processing reduces reliance on external components, improving system efficiency.
3️⃣ Improved Flexibility & Reconfigurability: Adaptive hardware allows on-the-fly reconfiguration to support multiple radar modes (SAR, phased array, synthetic aperture) in a single device. Future-proof designs can be updated via software-defined radio (SDR) techniques.
4️⃣ Faster Development & Deployment: Built-in AI acceleration enables advanced signal processing applications like interference mitigation, sensor fusion, and real-time anomaly detection. Streamlined toolchains from vendors like Xilinx (AMD Versal), Intel Agilex, and Achronix Speedster simplify development.

🚀 The Future of Radar & Signal Processing
Hybrid FPGAs are revolutionizing industries such as defense, aerospace, automotive, and telecommunications, delivering unparalleled speed, efficiency, and adaptability. As radar and signal processing applications grow more complex, these devices will be at the forefront of innovation. 
🔑 Takeaway: If you’re developing next-generation radar, electronic warfare, or high-speed communications systems, now is the time to explore hybrid FPGA architectures. The combination of programmability, AI acceleration, and custom hardware blocks is reshaping what’s possible. Are hybrid FPGAs part of your roadmap?
-
𝗥𝗲𝗱𝘂𝗰𝗶𝗻𝗴 𝗦𝗽𝘂𝗿𝘀 𝗶𝗻 𝗙𝗣𝗚𝗔 𝗗𝗶𝗿𝗲𝗰𝘁 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗦𝘆𝗻𝘁𝗵𝗲𝘀𝗶𝘀 (𝗗𝗗𝗦): Key Techniques for Cleaner Signal Generation

In communication systems, many DSP algorithms rely on complex sinusoids (sine and cosine) of specific angles to perform crucial tasks. Direct Digital Synthesis (DDS) has become a go-to solution for generating these sinusoids, offering several key benefits: arbitrarily fine tuning resolution to meet nearly any design specification; the ability to control both phase and frequency in a single sample period, which is ideal for phase modulation; and robust implementation using integer arithmetic. DDS is stable even with finite-length control words, eliminating the need for automatic gain control, and preserves phase continuity, which is especially valuable for tunable waveform generators. However, a challenge in DDS is reducing spurious signals (spurs) that can degrade performance. These spurs are influenced by factors like the input frequency word, the phase accumulator size, and the number of truncated phase bits. To combat this, three primary techniques have been developed: error feedforward, dithering, and error feedback. Error feedforward, patented by fred harris (US6333649B1), leverages the known phase error to adjust the LUT output for phase compensation. Dithering, a common technique, introduces a small random error into the phase accumulator, which spreads energy more evenly across the spectrum and minimizes spectral line concentration. Error feedback subtracts a prediction of the quantization error before it occurs using a linear predictor, leading to noise-shaping behavior that further reduces spurs. To illustrate these methods, the attached figure shows a comparison of two DDS architectures—one using error feedforward and the other using error feedback—simulated in MathWorks MATLAB with an 8-bit LUT and a 100 MHz sampling frequency. 
While error feedforward increases the Spurious-Free Dynamic Range (SFDR), it requires two multipliers, whereas error feedback avoids multipliers but yields a lower SFDR. For anyone interested in diving deeper into FPGA-based DDS with reduced spurious spectral lines, I highly recommend the following articles:
Ultra Low Phase Noise DDS: https://lnkd.in/d8Fmvy5h
Direct Digital Synthesis: A Tool For Periodic Wave Generation (Part 2): https://lnkd.in/dukBAtJr
#FPGA #DSP #DDS #Matlab
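A minimal software model of the phase accumulator, phase truncation, and dithering described above (the bit widths and frequency word here are illustrative assumptions, not the parameters from the attached figure):

```python
import numpy as np

def dds(freq_word, n_samples, acc_bits=32, lut_bits=8, dither=False, seed=0):
    """Toy DDS: a phase accumulator whose upper bits address a sine LUT.
    Truncating the lower (acc_bits - lut_bits) phase bits creates spurs;
    adding random dither below the truncation boundary spreads that
    energy into a broadband noise floor instead."""
    rng = np.random.default_rng(seed)
    lut = np.sin(2.0 * np.pi * np.arange(2**lut_bits) / 2**lut_bits)
    trunc = acc_bits - lut_bits
    acc = 0
    out = np.empty(n_samples)
    for i in range(n_samples):
        phase = acc
        if dither:  # random phase below the truncation boundary
            phase = (phase + int(rng.integers(0, 2**trunc))) % 2**acc_bits
        out[i] = lut[phase >> trunc]
        acc = (acc + freq_word) % 2**acc_bits
    return out

# A frequency word with nonzero low bits exercises phase truncation
x_plain = dds(freq_word=2**25 + 12345, n_samples=4096)
x_dith = dds(freq_word=2**25 + 12345, n_samples=4096, dither=True)
```

Plotting the magnitude spectra of `x_plain` and `x_dith` (e.g., with matplotlib) shows the trade-off the post describes: discrete truncation spurs versus a slightly raised broadband noise floor.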
-
Researchers at Wuhan University, #Wuhan, #China have developed a method to use bistatic HF #radar with a compact antenna for ship and aircraft tracking. Here are the key points summarizing the high-speed target tracking method for compact HF radar:

1. The method addresses challenges in tracking high-speed targets that cause significant range migration (RM) and Doppler frequency migration (DFM) with compact HF radar.
2. It uses the generalized Radon Fourier transform (GRFT) algorithm to detect targets and estimate motion parameters, overcoming issues caused by RM and DFM.
3. The method leverages the zero-mean distribution property of direction-of-arrival (DOA) estimation errors to reduce their impact on tracking accuracy.
4. Key steps include:
- Estimating the original track using the GRFT and MUSIC algorithms
- Predicting multiple tracks based on the original track
- Screening predicted tracks based on average headings
- Fusing screened tracks to generate the final track
5. Numerical simulations and field experiments demonstrated the method's effectiveness, especially for targets with large DOA estimation errors.
6. Compared to conventional CV-Kalman and CA-Kalman filters, the proposed method showed superior performance in reducing tracking errors.
7. The method works for both circular and rectilinear target motions.
8. Limitations include the requirement for a constant DOA trend over a coherent processing interval (CPI) and potential impacts from sea clutter.
9. Future work may focus on reducing CPI length, joint processing of multiple CPIs, and testing in coastal environments with sea clutter.

In summary, this method improves high-speed target tracking for compact HF radar by addressing RM and DFM issues and mitigating DOA estimation errors through innovative signal processing techniques. With modifications, this method has the potential to be extended to using #OTHR #ionobounce radar to track #hypersonic targets.
-
Excited to share the latest episode of the 6 Minutes Paper Talk Podcast! 🎙️ In this episode, I delve into the paper "A New Random Variable Normalizing Transformation With Application to the GLRT" by the legendary Steven Kay and Yazan Rawashdeh. In this paper, they explore innovative methods for converting random variables into approximate standard normals, enhancing the performance of the GLRT in model-order selection and signal detection. 🔍 Key Highlights: - Theorem and Definitions: Introduction to the CGF and LT. - Chi-Squared Example: Transformation of a chi-squared random variable to a standard normal. - PDF Comparison: True vs. asymptotic PDFs. - Practical Applications: Normalizing GLRT statistics. Don't miss this deep dive into advanced statistical methods for signal processing! It's also available on Spotify: link in the first comment. PS: If the content brought you value, please share it with someone who can benefit from it. If not, please tell me how I can improve my content. #AI #SignalProcessing #DeepLearning #Research #Podcast #GLRT #StatisticalMethods
Episode 4: A New Random Variable Normalizing Transformation With Application to the GLRT
https://www.youtube.com/