LoRa modulation is a widely used technology known for its long-range transmission capabilities, making it ideal for applications with low data rate requirements, such as IoT-enabled sensor networks. However, its inherently low data rate poses a challenge for applications that require higher throughput, such as video surveillance and disaster monitoring, where large image files must be transmitted over long distances in areas with limited communication infrastructure. In this paper, we introduce the LoRa Resource Allocation (LRA) algorithm, designed to address these limitations by enabling parallel transmissions, thereby reducing the total transmission time and increasing the bit rate (BR). The LRA algorithm leverages the quasi-orthogonality of LoRa's Spreading Factors (SFs) and employs specially designed end devices equipped with dual LoRa transceivers, each operating on a distinct SF. For experimental analysis, we choose an image transmission application and investigate various parameter combinations affecting the total transmission time in order to optimize interference, BR, and image quality. Experimental results show that our proposed algorithm reduces the total transmission time by 42.36% and 19.98% for the SF combinations of seven and eight, and eight and nine, respectively. In terms of BR, we observe improvements of 73.5% and 24.97% for these same combinations. Furthermore, bit error rate (BER) analysis confirms that the LRA algorithm delivers high-quality images at SNR levels above -5 dB in line-of-sight communication scenarios.
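To make the time-saving mechanism concrete, the sketch below uses the standard Semtech time-on-air formula (SX1276 datasheet) to compare sending an image serially on one SF against splitting it across two transceivers on quasi-orthogonal SFs, as the LRA end devices do. This is a minimal illustration only: the payload size, fragment limit, and even split between the two links are assumptions for the example, not the paper's configuration or allocation policy.

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz=125_000, cr=1,
                     preamble_syms=8, crc=True, implicit_header=False):
    """Time-on-air (s) for one LoRa packet, per the standard Semtech
    formula. cr=1 corresponds to coding rate 4/5."""
    t_sym = (2 ** sf) / bw_hz                  # symbol duration
    de = 1 if t_sym > 0.016 else 0             # low-data-rate optimisation
    t_preamble = (preamble_syms + 4.25) * t_sym
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc)
                   - 20 * int(implicit_header)) / (4 * (sf - 2 * de)))
        * (cr + 4), 0)
    return t_preamble + payload_syms * t_sym

def serial_vs_parallel(total_bytes, sf_a, sf_b, max_payload=200):
    """Compare one-SF serial transmission of `total_bytes` with a
    dual-transceiver split across sf_a and sf_b. The 50/50 split is
    an illustrative assumption, not the LRA allocation rule."""
    def airtime(nbytes, sf):
        full, rest = divmod(nbytes, max_payload)
        t = full * lora_time_on_air(max_payload, sf)
        return t + (lora_time_on_air(rest, sf) if rest else 0.0)

    serial = airtime(total_bytes, sf_a)
    half = total_bytes // 2
    # Parallel links finish independently; total time is the slower one.
    parallel = max(airtime(half, sf_a), airtime(total_bytes - half, sf_b))
    return serial, parallel

if __name__ == "__main__":
    serial, parallel = serial_vs_parallel(50_000, sf_a=7, sf_b=8)
    print(f"serial SF7:       {serial:6.1f} s")
    print(f"parallel SF7+SF8: {parallel:6.1f} s")
    print(f"reduction:        {100 * (1 - parallel / serial):.1f} %")
```

Because a higher SF has longer symbols, the naive even split above is dominated by the slower (higher-SF) link; a practical allocator would bias more bytes toward the faster SF, which is presumably part of what the LRA algorithm optimizes when selecting parameter combinations.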
DOI: http://dx.doi.org/10.3390/s25020518