The characterization of inorganic elements in produced water (PW) samples is a difficult task because of the complexity of the matrix. This work presents a methodology for the quantification of dissolved Fe in PW from the oil industry by flame atomic absorption spectrometry (FAAS) after cloud point extraction (CPE). The procedure is based on CPE using PAN (1-(2-pyridylazo)-2-naphthol) as the complexing agent and Triton X-114 as the surfactant. The Fe extraction parameters were optimized using a Box-Behnken design. The proposed method presented a LOQ of 0.010 μg mL⁻¹ and a LOD of 0.003 μg mL⁻¹. The precision of the method was evaluated in terms of repeatability, yielding a coefficient of variation of 2.54%. The accuracy of the method was assessed by recovery experiments with Fe-spiked samples, which gave a recovery of 103.28%. The method was applied with satisfactory performance to the determination of Fe by FAAS in PW samples.
DOI: http://dx.doi.org/10.1016/j.marpolbul.2016.10.068
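As a hedged illustration of the figures of merit quoted above, the sketch below applies the common 3σ/slope and 10σ/slope conventions for LOD/LOQ and the usual spike-recovery formula. The blank standard deviation, calibration slope, and concentrations are hypothetical placeholders, not values reported by the authors, and the abstract does not state that these exact conventions were used.

```python
# Hedged sketch: 3*sigma/slope and 10*sigma/slope conventions for LOD/LOQ
# and the usual spike-recovery calculation. All numeric inputs below are
# hypothetical, not figures taken from the study.

blank_sd = 0.0001   # standard deviation of blank absorbance (hypothetical)
slope = 0.10        # calibration slope, absorbance per (ug/mL) (hypothetical)

lod = 3 * blank_sd / slope     # limit of detection, ug/mL
loq = 10 * blank_sd / slope    # limit of quantification, ug/mL

# Spike recovery: concentration found in the spiked sample minus the
# unspiked sample, relative to the amount of Fe added.
c_spiked, c_unspiked, c_added = 0.105, 0.002, 0.100   # ug/mL (hypothetical)
recovery_pct = 100 * (c_spiked - c_unspiked) / c_added

print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL, recovery = {recovery_pct:.1f}%")
```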
Sci Data
December 2024
National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Huazhong Agricultural University, Wuhan, 430070, P. R. China.
Point cloud analysis is a crucial task in computer vision. Despite significant advances over the past decade, developments in the agricultural domain have been held back by a scarcity of datasets. To facilitate 3D point cloud research in the agricultural community, we introduce Crops3D, a diverse real-world dataset derived from authentic agricultural scenarios.
J Imaging
December 2024
European Commission, Joint Research Centre (JRC), Via Enrico Fermi 2749, 21027 Ispra, Italy.
In this paper, we address the point-cloud segmentation problem for spinning laser sensors from a deep-learning (DL) perspective. Since these sensors natively provide their measurements on a 2D grid, we directly apply state-of-the-art models designed for visual information to the segmentation task and then exploit the range information to ensure 3D accuracy. This allows us to effectively address the main challenges of applying DL techniques to point clouds.
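To make the "2D grid" idea concrete, the sketch below shows one common way to arrange a spinning-lidar point cloud as a range image via a spherical projection, so that a 2D segmentation network can be applied. The field of view, image resolution, and function name are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024,
                          fov_up_deg=15.0, fov_down_deg=-15.0):
    """Project an (N, 3) point cloud onto an H x W range image
    (rows = elevation, cols = azimuth). FOV and grid size are
    illustrative assumptions, not sensor specs from the paper."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                  # range per point
    yaw = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up = np.radians(fov_up_deg)
    fov = fov_up - np.radians(fov_down_deg)

    # Normalize angles to pixel coordinates.
    u = np.clip(np.floor(((yaw + np.pi) / (2 * np.pi)) * w), 0, w - 1).astype(int)
    v = np.clip(np.floor(((fov_up - pitch) / fov) * h), 0, h - 1).astype(int)

    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                       # keep last hit per cell
    return img
```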
J Imaging
December 2024
Faculty of Sustainable Design Engineering, University of Prince Edward Island, Charlottetown, PE C1A 4P3, Canada.
This study introduced a novel approach to 3D image segmentation, applying a neural network framework to 2D depth-map imagery with Z-axis values visualized through color gradation. The research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities.
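A minimal sketch of the "Z values as color gradation" step is given below: it normalizes a depth map and maps it through a colormap so a standard 2D instance-segmentation model (a YOLOv8- or Detectron2-style network) can consume it. The viridis colormap, normalization range, and image size are illustrative choices, not details reported in the paper.

```python
import numpy as np
from matplotlib import cm

def depth_to_color(depth, z_min=None, z_max=None):
    """Map a 2D depth (Z) array to an RGB uint8 image via a color
    gradient. Colormap and normalization are assumptions for
    illustration only."""
    z_min = np.nanmin(depth) if z_min is None else z_min
    z_max = np.nanmax(depth) if z_max is None else z_max
    norm = np.clip((depth - z_min) / max(z_max - z_min, 1e-8), 0.0, 1.0)
    rgba = cm.viridis(norm)                 # (H, W, 4) floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)

# Example: a synthetic depth map of a filled tote rendered as RGB.
depth = np.random.uniform(0.5, 2.0, size=(480, 640)).astype(np.float32)
rgb = depth_to_color(depth)
print(rgb.shape, rgb.dtype)                 # (480, 640, 3) uint8
```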
J Imaging
December 2024
National Electronic and Computer Technology Center, National Science and Technology Development Agency, Khlong Luang, Pathum Thani 12120, Thailand.
Accurate human action recognition is becoming increasingly important across various fields, including healthcare and self-driving cars. A simple way to enhance model performance is to incorporate additional data modalities, such as depth frames, point clouds, and skeleton information. While previous studies have predominantly used late fusion techniques to combine these modalities, our research introduces a multi-level fusion approach that combines information at the early, intermediate, and late stages. Furthermore, recognizing the challenges of collecting multiple data types in real-world applications, our approach seeks to exploit multimodal techniques while relying solely on RGB frames as the single data source.
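To illustrate the difference between late-only and multi-level fusion, the toy module below fuses two feature streams at the early (concatenated inputs), intermediate (concatenated mid-level features), and late (averaged logits) stages. The two-stream setup, layer sizes, and class count are assumptions for illustration, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    """Toy sketch of early + intermediate + late fusion of two
    modality streams; dimensions are illustrative assumptions."""
    def __init__(self, in_a=128, in_b=128, hidden=256, n_classes=10):
        super().__init__()
        self.early = nn.Linear(in_a + in_b, hidden)   # early: fuse raw features
        self.branch_a = nn.Linear(in_a, hidden)
        self.branch_b = nn.Linear(in_b, hidden)
        self.mid = nn.Linear(hidden * 3, hidden)      # intermediate: fuse mid-level features
        self.head_fused = nn.Linear(hidden, n_classes)
        self.head_a = nn.Linear(hidden, n_classes)
        self.head_b = nn.Linear(hidden, n_classes)

    def forward(self, xa, xb):
        e = torch.relu(self.early(torch.cat([xa, xb], dim=-1)))
        fa = torch.relu(self.branch_a(xa))
        fb = torch.relu(self.branch_b(xb))
        m = torch.relu(self.mid(torch.cat([e, fa, fb], dim=-1)))
        # Late fusion: average the fused-branch and per-branch logits.
        return (self.head_fused(m) + self.head_a(fa) + self.head_b(fb)) / 3.0

logits = MultiLevelFusion()(torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)   # torch.Size([4, 10])
```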
View Article and Find Full Text PDFBiomimetics (Basel)
December 2024
Institute of Instrument Science and Engineering, Southeast University, Nanjing 210096, China.
The realization of hand-function reengineering with a robotic manipulator is a research hotspot in the field of robotics. In this paper, we propose a multimodal perception and control method for a robotic hand intended to assist people with disabilities. The movement of the human hand can be divided into two parts: coordinating the posture of the fingers, and coordinating the timing of grasping and releasing objects.
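As a rough illustration of the second part of that decomposition (grasp/release timing, kept separate from finger-posture generation), the sketch below encodes the timing logic as a small state machine. The phase names, signals, and force thresholds are hypothetical and are not the controller described in the paper.

```python
from enum import Enum, auto

class GraspPhase(Enum):
    IDLE = auto()
    CLOSING = auto()
    HOLDING = auto()
    RELEASING = auto()

def next_phase(phase, object_detected, release_requested, contact_force):
    """Toy grasp/release timing coordinator; thresholds and input
    signals are illustrative assumptions only."""
    if phase is GraspPhase.IDLE and object_detected:
        return GraspPhase.CLOSING
    if phase is GraspPhase.CLOSING and contact_force > 1.0:    # N, hypothetical
        return GraspPhase.HOLDING
    if phase is GraspPhase.HOLDING and release_requested:
        return GraspPhase.RELEASING
    if phase is GraspPhase.RELEASING and contact_force < 0.1:  # N, hypothetical
        return GraspPhase.IDLE
    return phase

print(next_phase(GraspPhase.IDLE, True, False, 0.0))   # GraspPhase.CLOSING
```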