This paper proposes a two-phase training method to design the codewords that map the cluster indices of the input feature vectors to the outputs of new perceptrons with multi-pulse type activation functions. The proposed method is applied to classify two types of tachycardia. First, the total number of new perceptrons is initialized to the dimension of the input feature vectors.
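As a rough illustration of the kind of unit described here, the following is a minimal sketch of a multi-pulse type activation: a perceptron output that fires on several disjoint intervals of its net input, so one unit can be associated with more than one cluster codeword. The interval boundaries and the 0/1 output values are illustrative assumptions, not the trained design from the paper.

```python
import numpy as np

def multi_pulse_activation(net_input, pulse_intervals):
    """Output 1 when the net input falls in any of several disjoint 'pulse'
    intervals, 0 otherwise. The intervals are hypothetical placeholders."""
    hits = [(lo <= net_input) & (net_input < hi) for lo, hi in pulse_intervals]
    return np.where(np.any(hits, axis=0), 1.0, 0.0)

# A single unit responding to two separate ranges of the projected input.
x = np.linspace(-3.0, 3.0, 7)
print(multi_pulse_activation(x, pulse_intervals=[(-2.0, -1.0), (0.5, 2.0)]))
```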
IEEE Trans Image Process, September 2021
Face hallucination, or face super-resolution, is a practical application of general image super-resolution that has recently been studied by many researchers. The challenge of good face hallucination comes from the variety of poses, illuminations, facial expressions, and other degradations. Many proposed methods address this by using a generative neural network to reduce a perceptual loss so that a photo-realistic image can be generated.
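For context, here is a minimal sketch of the perceptual-loss idea mentioned above, assuming a PyTorch setup and a pretrained VGG-16 feature extractor; the layer cut-off and the MSE form are common choices, not necessarily those used in this paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Fixed feature extractor (roughly up to relu3_3); its weights are not updated.
features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in features.parameters():
    p.requires_grad = False

def perceptual_loss(sr_face, hr_face):
    """MSE between deep features of the hallucinated face and the ground truth."""
    return F.mse_loss(features(sr_face), features(hr_face))

sr = torch.rand(1, 3, 128, 128)  # generator output (toy tensor)
hr = torch.rand(1, 3, 128, 128)  # ground-truth high-resolution face
print(perceptual_loss(sr, hr).item())
```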
IEEE Trans Image Process, January 2021
To improve the coding performance of depth maps, 3D-HEVC includes several new depth intra coding tools at the expense of increased complexity, due to a flexible quadtree Coding Unit/Prediction Unit (CU/PU) partitioning structure and a huge number of intra mode candidates. Compared to natural images, depth maps contain large plain regions surrounded by sharp edges at object boundaries. We observe that the features proposed in the literature speed up either the CU/PU size decision or the intra mode decision, and they struggle to make proper predictions for CUs/PUs containing multi-directional edges in depth maps.
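To make the plain-region observation concrete, here is a toy early-termination check for CU splitting, assuming block variance as the flatness measure; both the measure and the threshold are illustrative, not the decision features used in the paper.

```python
import numpy as np

def cu_can_skip_split(depth_block, flat_threshold=1.0):
    """Treat a nearly constant depth block as 'plain' and stop splitting early;
    blocks with sharp edges keep being examined. The threshold is a toy value."""
    return float(np.var(depth_block.astype(np.float64))) < flat_threshold

plain_cu = np.full((16, 16), 128, dtype=np.uint8)                    # flat region
edge_cu = np.hstack([np.full((16, 8), 40), np.full((16, 8), 200)])   # sharp edge
print(cu_can_skip_split(plain_cu), cu_can_skip_split(edge_cu))       # True False
```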
IEEE Trans Image Process, September 2019
Screen content coding (SCC) is an extension of High Efficiency Video Coding that adopts new coding modes to improve coding efficiency at the expense of increased complexity. This paper proposes an online-learning approach for fast mode decision and coding unit (CU) size decision in SCC. For the fast mode decision, corner points are first extracted as a feature unique to screen content, an essential pre-processing step that guides the Bayesian decision modeling.
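As a rough sketch of how a corner-point feature could feed a Bayesian decision, assuming a Harris detector and toy likelihood/prior values (none of these numbers come from the paper):

```python
import numpy as np
import cv2

def corner_count(cu_luma):
    """Count Harris corner responses in a CU; text and graphics regions in
    screen content typically produce many more corners than natural content."""
    response = cv2.cornerHarris(np.float32(cu_luma), blockSize=2, ksize=3, k=0.04)
    if response.max() <= 0:
        return 0
    return int((response > 0.01 * response.max()).sum())

def mode_priority(n_corners, corner_cutoff=8,
                  likelihood_scc=(0.7, 0.3), likelihood_natural=(0.2, 0.8),
                  prior_scc=0.5):
    """Bayesian decision: compare unnormalised posteriors of the two classes
    given the binary feature 'many corners'. All probabilities are toy values."""
    idx = 0 if n_corners >= corner_cutoff else 1
    post_scc = likelihood_scc[idx] * prior_scc
    post_natural = likelihood_natural[idx] * (1.0 - prior_scc)
    return ("try screen-content modes first"
            if post_scc >= post_natural
            else "try conventional intra modes first")
```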
Int J Bioinform Res Appl, May 2015
DNA microarray experiments unavoidably generate gene expression data with missing values. This complicates subsequent analyses such as bicluster detection, which aims to find sets of co-expressed genes under some experimental conditions. Missing values therefore need to be estimated before bicluster detection.
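As an illustration of this pre-processing step, a minimal sketch using k-nearest-neighbour imputation, a standard choice for expression data; the paper's own estimator may differ.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy genes-by-conditions expression matrix with missing entries (NaN).
expr = np.array([[2.1, np.nan, 1.8,    2.0],
                 [2.0, 1.7,    1.9,    np.nan],
                 [0.4, 0.5,    np.nan, 0.6]])

# Estimate each missing value from the most similar rows before biclustering.
imputed = KNNImputer(n_neighbors=2).fit_transform(expr)
print(imputed)
```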
IEEE Trans Image Process, February 2009
In video applications where video sequences are compressed and stored for future delivery, the encoding process is typically carried out without sufficient prior knowledge of the network's channel characteristics. Error-resilient transcoding plays an important role in adding resilience to the video data wherever and whenever it is needed. Recently, a reference picture selection (RPS) scheme has been adopted in error-resilient transcoders to reduce error effects in already encoded video bitstreams.
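A minimal sketch of the RPS idea with decoder feedback, assuming the transcoder learns which frames arrived intact; the feedback format and indexing are assumptions, not the scheme specified in the paper.

```python
def select_reference(current_index, acked_frames):
    """Predict the next frame from the most recent frame the decoder has
    acknowledged as received intact, instead of the possibly corrupted
    previous frame. Returning None signals a forced intra refresh."""
    candidates = [f for f in acked_frames if f < current_index]
    return max(candidates) if candidates else None

print(select_reference(current_index=8, acked_frames={1, 2, 3, 5}))  # -> 5
```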
Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained.
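To illustrate the frame-skipping idea, here is a toy selection rule that keeps a frame only once enough motion activity has accumulated, so the remaining bit budget goes to representative frames; the activity measure and threshold are assumptions, not the paper's allocation scheme.

```python
def select_frames(motion_activity, threshold=4.0):
    """Keep frame 0 and every frame at which accumulated motion activity since
    the last kept frame reaches the threshold; skip (drop) the rest."""
    kept, accumulated = [], 0.0
    for i, activity in enumerate(motion_activity):
        accumulated += activity
        if i == 0 or accumulated >= threshold:
            kept.append(i)
            accumulated = 0.0
    return kept

print(select_frames([1.0, 0.5, 3.2, 0.2, 0.3, 5.0, 0.1]))  # -> [0, 4, 5]
```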
IEEE Trans Image Process, September 2007
MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions such as backward playback in MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation.
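One way to see the complication: with a naive decoder, displaying frames in reverse order forces re-decoding from the preceding I-frame for every displayed frame. The GOP structure below (an I-frame every gop_size frames, forward prediction only) is an illustrative assumption, not the structure analysed in the paper.

```python
def frames_decoded_for_backward_play(num_frames, gop_size):
    """Count decode operations for naive reverse playback: each displayed frame
    requires decoding from its preceding I-frame up to that frame."""
    total = 0
    for k in range(num_frames - 1, -1, -1):      # display order: last to first
        last_i_frame = (k // gop_size) * gop_size
        total += k - last_i_frame + 1
    return total

# 30 frames, I-frame every 15: far more decodes than the 30 of forward playback.
print(frames_decoded_for_backward_play(num_frames=30, gop_size=15))  # -> 240
```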
IEEE Trans Image Process, May 2005
To reduce the computational load, many conventional fast block-matching algorithms have been developed to shrink the set of candidate search points in the search window. All of these algorithms produce some quality degradation in the predicted image. Alternatively, another class of fast block-matching algorithms introduces no prediction error compared with the full-search algorithm; it instead reduces the number of necessary matching evaluations over the search points in the search window.
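A minimal sketch of this lossless class of methods, assuming the partial-distortion-elimination trick (abort a candidate's SAD as soon as it exceeds the current best), which returns exactly the same motion vector as plain full search while skipping much of the arithmetic; it is offered only as an example of the class, not the specific algorithm proposed in the paper.

```python
import numpy as np

def partial_sad(block, candidate, best_so_far):
    """Accumulate SAD row by row; abandon the candidate once it can no longer
    beat the best match found so far. Returns None when abandoned."""
    sad = 0
    for row in range(block.shape[0]):
        sad += int(np.abs(block[row].astype(int) - candidate[row].astype(int)).sum())
        if sad >= best_so_far:
            return None
    return sad

def full_search(block, ref_frame, top, left, radius=4):
    """Exhaustive search whose result matches plain full search, but with far
    fewer operations thanks to the early exit inside partial_sad."""
    h, w = block.shape
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - h and 0 <= x <= ref_frame.shape[1] - w:
                sad = partial_sad(block, ref_frame[y:y + h, x:x + w], best)
                if sad is not None:
                    best, best_mv = sad, (dy, dx)
    return best_mv, best
```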