Selected Papers:
FWin transformer for dengue prediction under climate and ocean influence (with N.T. Tran and G. Zhou), arXiv:2403.07027, 2024.
Fourier-Mixed Window Attention: Accelerating Informer for
Long Sequence Time-Series Forecasting (with N. T. Tran), arXiv:2307.00493, 2023.
Feature Affinity Assisted Knowledge Distillation and
Quantization of Deep Neural Networks on Label-Free Data (with
Z. Li, B. Yang, P. Yin, Y. Qi), IEEE Access, Vol 11, pp. 78042-78051, 2023. DOI:10.1109/ACCESS.2023.3297890
Convergence of Hyperbolic Neural Networks under Riemannian Stochastic Gradient Descent (with W. Whiting, B. Wang), Comm. Applied Math Computation, Oct. 2023;
DOI:10.1007/s42967-023-00302-9.
A Proximal Algorithm for Network Slimming (with K. Bui, F. Xue,
F. Park, Y. Qi), in Proc. of the 9th International Conference on
Machine Learning, Optimization and Data Science, Grasmere, Lake District, England, Sept, 2023.
Weighted Anisotropic-Isotropic Total Variation (AITV) for Poisson Denoising
(with K. Bui, Y. Lou, F. Park), in IEEE International Conference on Image Processing,
1020-1024, 2023. DOI:10.1109/ICIP49359.2023.10222230
Difference of Anisotropic and Isotropic TV
(AITV) for Segmentation under Blur and Poisson Noise (with K. Bui,
Y. Lou, F. Park),
Frontiers in Computer Science, 5:1131317, June 2023; DOI:10.3389/fcomp.2023.1131317.
Glassoformer: a query-sparse transformer for post-fault power grid voltage
prediction (with Y. Zheng, C. Hu, G. Lin, M. Yue, B. Wang), in Proc. of
IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 3968--3972, 2022.
DOI:10.1109/ICASSP43922.2022.9747394
RARTS: An Efficient First-Order Relaxed Architecture Search Method (with F. Xue, Y-Y. Qi), IEEE Access, 2022, DOI:10.1109/ACCESS.2022.3185095.
Recurrence of Optimum for Training Weight and Activation Quantized
Networks (with Z. Long, P. Yin), Applied and Computational Harmonic Analysis,
62 (2023), pp. 41-65. Online August, 2022.
DOI: https://doi.org/10.1016/j.acha.2022.07.006.
Proximal Implicit ODE Solvers for
Accelerating Learning Neural ODEs (with J. Baker, H. Xia, Y. Wang, E. Cherkaev, A. Narayan, L. Chen, A. Bertozzi, S. Osher, B. Wang), arXiv:2204.08621, April 19, 2022.
Searching Intrinsic Dimensions of
Vision Transformers (with F. Xue, B. Yang, Y-Y. Qi), in Proc. of the 20th International Conference on Innovations in Engineering and Sciences, Porto, pp. 18-24, 2022.
DOI:10.17758/HEAIG10.H0622602
Channel Pruning in
Quantization-aware Training: An Adaptive Projection-Gradient-Descent-Shrinkage-Splitting
Method (with Z. Li), arXiv:2204.04375, in IEEE International
Conference on AI for Industries, pp. 31-34, 2022;
DOI:10.1109/AI4154798.2022.00015
Enhancing Zero-Shot Many to Many Voice Conversion
via Self-Attention VAE with Structurally Regularized Layers
(with Z. Long, Y. Zheng, M. Yu), arXiv preprint arXiv:2203.16037,
in IEEE International Conference on AI for Industries, pp. 59-63, 2022;
DOI:10.1109/AI4154798.2022.00022.
An Efficient Smoothing and Thresholding Image Segmentation Framework
with Weighted Anisotropic-Isotropic Total Variation (AITV)
(with K. Bui, Y. Lou, F. Park),
arXiv:2202.10115; Comm. on Applied Math and Computation, 2024. DOI:10.1007/s42967-023-00339-w.
An Integrated Recurrent Neural Network
and Regression Model with Spatial and Climatic Couplings for Vector-Borne Disease
Dynamics (with Z. Li, G. Zhou), ICPRAM, pp. 505-510, Feb. 2022.
DOI:10.5220/0010762700003122.
Improving Network Slimming with Nonconvex Regularization (with K. Bui,
F. Park, S. Zhang, Y. Qi), IEEE Access,
Vol. 9, pp. 115292-115314, 2021; DOI:10.1109/ACCESS.2021.3105366.
Network Compression via Cooperative Architecture Search and Distillation (with F. Xue), pp. 42-43, IEEE International Conference on AI for Industries (AI4I), Sept., 2021;
DOI:10.1109/AI4151902.2021.00018.
Improving Efficient Semantic Segmentation Networks by Enhancing Multi-Scale Feature
Representation via Resolution Path Based Knowledge Distillation and Pixel Shuffle (with
B. Yang, F. Xue, Y. Qi), pp. 325-336, in Proc. of the 16th International Symposium on Visual Computing, Oct, 2021, Springer, Cham. DOI:10.1007/978-3-030-90436-4_26
AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks (with J. Lyu, S. Zhang, Y-Y. Qi), in Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2020), San Diego, CA, August, 2020. DOI:10.1145/3394486.3403103
Selected as a KDD most influential paper on paperdigest.org on March 8, 2021
Nonconvex Regularization for Network Slimming: Compressing CNNs Even More
(with K. Bui, F. Park, S. Zhang, Y. Qi), Awarded Springer-Verlag Best Paper
at the 15th International Symposium on Visual Computing, Oct. 5-7, 2020. DOI:10.1007/978-3-030-64556-4_4
Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets (with P. Yin, J. Lyu, S. Zhang, S. Osher, Y-Y. Qi), in Proceedings of International Conference on Learning Representations (ICLR), New Orleans, May, 2019.
Learning Quantized Neural Nets
by Coarse Gradient Method for Nonlinear Classification (with Z. Long, P. Yin),
Research in the Mathematical Sciences, 8, 48 (2021),
https://doi.org/10.1007/s40687-021-00281-4.
An Integrated Approach to Produce
Robust Deep Neural Network
Models with High Efficiency (with Z. Li, B. Wang), in Proc. of
the 7th International Conference on Machine Learning, Optimization and
Data Science, Oct. 2021; won Best Paper Award, Nov 25, 2021; LNCS 13164, pp. 451-465, 2022.
A Spatial-Temporal Graph Based Hybrid Infectious Disease Model with Application to COVID-19 (with Y. Zheng, Z. Li, G. Zhou),
arXiv preprint arXiv:2010.09077, Oct., 2020; in Proc. of the 10th International Conference
on Pattern Recognition Applications and Methods (ICPRAM), Feb 4-6, 2021;
Vol. 1 (Deep Learning and Neural Networks; Machine Learning Methods), pp. 357-364,
DOI:10.5220/0010349003570364.
A Recurrent Neural Network
and Differential Equation Based Spatiotemporal Infectious Disease Model
with Application to COVID-19 (with Z. Li, Y. Zheng, G. Zhou),
in Proc. of the 12th International Conference on Knowledge Discovery and
Information Retrieval (KDIR), Nov 2-4, 2020;
Vol. 1, pp. 93-103, 2020. DOI:10.5220/0010130000930103.
(arXiv preprint,
arXiv:2007.10929v1, July, 2020. medRxiv: https://doi.org/10.1101/2020.07.20.20158568.)
Structured Sparsity
of Convolutional Neural Networks via Nonconvex Sparse Group Regularization
(with K. Bui, F. Park, S. Zhang, Y. Qi),
Frontiers in Applied Mathematics and Statistics:
Mathematics of Computation and
Data Science, 6:529564, 2021. DOI:10.3389/fams.2020.529564.
Lorentzian Peak Sharpening and Sparse Blind Source Separation for NMR Spectroscopy (with Y. Sun), Signal, Image and Video Processing, 16(3), pp. 633-641, 2022. DOI:10.1007/s11760-021-02002-4.
Structure Assisted NMF Methods for Separation of Degenerate Mixture Data
with Application to NMR Spectroscopy (with Y. Sun, K. Huang), International
Journal of Mathematics and Computation, 33(1), 2022. ISSN 0974-570X (Online).
A Weighted Difference of Anisotropic and Isotropic Total Variation
(AITV) for Relaxed Mumford-Shah Color and Multiphase Image Segmentation (with K. Bui, F. Park, Y. Lou),
arXiv:2005.04401, 2020; SIAM J. Imaging Sci, 14(3), 1078-1113, 2021. DOI:10.1137/20M1337041.
Global Convergence and Geometric Characterization of Slow to
Fast Weight Evolution in Neural Network Training for Classifying Linearly Non-Separable Data (with Z. Long, P. Yin),
arXiv preprint arXiv:2002.12563v2, 2020; Inverse Problems and Imaging, 15(1), pp. 41-62, Feb. 2021.
Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets (with T. Dinh, B. Wang, A. Bertozzi, S. Osher), in Proceedings of the 6th International Conference
on Machine Learning, Optimization, and Data Science, Siena-Tuscany, Italy; July, 2020. LNCS 12566,
pp. 362-381, 2020.
Awarded 1st Special Mention. DOI:10.1007/978-3-030-64580-9_31.
Channel Pruning for Deep Neural Networks via a Relaxed Groupwise Splitting Method (with B. Yang, J. Lyu, S. Zhang, Y-Y Qi), IEEE International Conference on AI for Industries (AI4I), 2019.
A Multistage Backward Differentiable Method for Constructing Light Convolutional Neural Networks (with F. Xue, J. Lyu, S. Zhang, Y-Y Qi),
IEEE International Conference on AI for Industries (AI4I), 2019.
Training Quantized Deep Neural Networks and Applications with
Blended Coarse Gradient Descent, SIAM News, 52(4), May 2019.
Learning Sparse
Neural Networks via L0 and TL1 by a Relaxed Variable Splitting Method
with Application to Multi-scale Curve Classification (with F. Xue),
arXiv preprint arXiv:1902.07419, Feb. 20, 2019; in Proc. World Congress Global Optimization, Metz, France, July, 2019. DOI:10.1007/978-3-030-21803-4_80. In: Le Thi H., Minh Le H., Pham Dinh T. (eds), Optimization of Complex Systems: Theory, Models, Algorithms and Applications, pp. 800-809, Advances in Intelligent Systems and Computing, v. 991, Springer, 2020.
A Study on Graph-Structured Recurrent Neural Networks and Sparsification with Application to Epidemic Forecasting (with Z. Li, X. Luo, B. Wang, A. Bertozzi), arXiv preprint arXiv:1902.05113, Feb. 14, 2019; in Proc. World Congress Global Optimization, Metz, France, July 2019. DOI:10.1007/978-3-030-21803-4_73. In: Le Thi H., Minh Le H., Pham Dinh T. (eds), Optimization of Complex Systems: Theory, Models, Algorithms and Applications, pp. 730-739, Advances in Intelligent Systems and Computing, v. 991, Springer, 2020.
Convergence of
a Relaxed Variable Splitting Coarse Gradient Descent Method for Learning
Sparse Weight Binarized Activation Neural Networks (with T. Dinh),
arXiv preprint arXiv:1901.09731, 2019; Frontiers in Applied Mathematics and Statistics: Mathematics of Computation and Data Science, 6:13, 2020. DOI:10.3389/fams.2020.00013.
Convergence of a Relaxed
Variable Splitting Method for Learning Sparse Neural Networks via
L1, L0, and transformed-L1 Penalties (with T. Dinh),
arXiv:1812.05719, 2018; in Proceedings of Intelligent Systems
Conference (IntelliSys) 2020, Sept. 3-4, Amsterdam, The Netherlands.
In: Arai K, Kapoor S, Bhatia R. (eds)
Intelligent Systems and Applications, pp. 360-374. Advances in Intelligent Systems and
Computing, vol. 1250. Springer, Cham.
https://doi.org/10.1007/978-3-030-55180-3_27
Sparse Kalman Filtering Approaches to Realized Covariance Estimation from High Frequency Financial Data (with M. Ho), Mathematical Programming, Series B, 176(1-2), pp. 247-278, July, 2019. DOI:https://doi.org/10.1007/s10107-019-01371-6.
Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks (with P. Yin, S. Zhang, J. Lyu, S. Osher, Y-Y. Qi), Research in the Mathematical Sciences, 6(1):14, 2019.
DOI:10.1007/s40687-018-0177-6,
arXiv preprint arXiv:1808.05240.
BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights (with P. Yin, S. Zhang, J. Lyu, S. Osher, Y-Y. Qi), SIAM Journal on Imaging Sciences, 11(4), pp. 2205-2223, 2018;
DOI:10.1137/18M1166134; arXiv preprint arXiv:1801.06313, 2018.
Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection (with P. Yin, S. Zhang, Y-Y. Qi), arXiv:1612.06052v2, 2016; J. Computational Mathematics, 37(3): 349-359, 2019.
Online 9/2018, DOI:10.4208/jcm.1803-m2017-0301.
Featured article in 2019: www.global-sci.org/intro/more.html?journal=jcm&type=8.
Deep Learning for Real-Time Crime Forecasting and its Ternarization (with B. Wang, P. Yin, A. Bertozzi, J. Brantingham, S. Osher), arXiv:1711.08833, 2017; Chinese Annals of Mathematics, CAM Series B, 40(6), pp. 949-966, 2019.
Minimization of Transformed L1 Penalty: Theory, Difference of Convex Function Algorithm, and Robust Application in Compressed Sensing (with S. Zhang), Mathematical Programming, Series B, 169(1), pp. 307-336, 2018. Online Mar. 5, 2018; https://doi.org/10.1007/s10107-018-1236-x. Springer Nature Sharing: http://rdcu.be/IoQX
Three L1 based nonconvex methods in constructing
sparse mean reverting portfolios (with X. Long, K. Solna),
J. Sci. Computing, 75(2), pp. 1156-1186, 2018. online Oct. 20, 2017, DOI:10.1007/s10915-017-0578-5.
L1 Minimization Method for
Link Flow Correction
(with P. Yin, Z. Sun, W. Jin),
Transportation Research, Part B: Methodological, 104 (2017),
pp. 398-408.
Linear Feature Transform and Enhancement of Classification on Deep Neural Network (with P. Yin, Y-Y. Qi),
CAM report 16-41, UCLA, 2016; J. Sci. Computing, 76(3), pp. 1396-1406, 2018.
Online Feb. 23, 2018, https://doi.org/10.1007/s10915-018-0666-1
Difference-of-Convex Learning: Directional Stationarity, Optimality,
and Sparsity (with M. Ahn, J-S. Pang), SIAM J. Optimization, 27(3), pp. 1637-1665, 2017.
Transformed Schatten-1 Iterative Thresholding Algorithms for
Low Rank Matrix Completion (with S. Zhang, P. Yin), Comm. Math Sci, 15(3),
pp. 839--862, 2017.
Computational Modeling of
Spectral Data Fitting with
Nonlinear Distortions
(with Y. Sun, W. Wu),
Signal, Image, and Video Processing, 11(4), pp. 651-658, 2017.
DOI:10.1007/s11760-016-1006-2.
Iterative L1 minimization for non-convex compressed sensing (with P. Yin),
J. Computational Mathematics, 35(4), 2017, pp. 437--449.
Minimization of transformed L1 penalty: closed form representation and iterative thresholding algorithms (with S. Zhang), Comm. Math Sci,
15(2), pp. 511--537, 2017.
Point Source Super-resolution via Non-convex L1 Based
Methods (with Y. Lou, P. Yin),
J. Sci. Computing, 68(3), pp. 1082-1100, 2016.
DOI:10.1007/s10915-016-0169-x.
Weighted Elastic Net Penalized Mean-Variance Portfolio Design and Computation
(with M. Ho, Z. Sun), SIAM J. Financial Mathematics, Vol. 6,
pp. 1220-1244, 2015.
A Weighted Difference of Anisotropic and Isotropic Total Variation (AITV) Model for Image Processing
(with Y. Lou, T. Zeng, S. Osher),
SIAM J. Imaging Sci, 8(3), pp. 1798-1823, 2015.
Parallelization of a Color-Entropy
Preprocessed
Chan-Vese Model for Face Contour Detection on
Multi-core CPU and GPU
(with X. Shi, F. Park, L. Wang, Y-Y. Qi),
Parallel Computing, 49(2015), pp. 28-49,
http://dx.doi.org/10.1016/j.parco.2015.07.002. CAM Report, 15-43, UCLA.
PhaseLiftOff: an Accurate and Stable Phase Retrieval Method Based on Difference of Trace and Frobenius Norms (with P. Yin),
Comm. Math Sciences, 13(4), pp. 1033-1049, 2015.
Computational aspects of constrained L1-L2 minimization
for compressive sensing (with Y. Lou, S. Osher),
in Modelling, Computation and Optimization in Information Systems and
Management Sciences, Springer Series on
Advances in Intelligent Systems and Computing 359,
pp. 169--180, eds. H.A. Le Thi, T. Pham Dinh, N.T. Nguyen, 2015.
CAM Report 15-08, UCLA.
Minimization of L1-L2 for Compressed Sensing (with P. Yin, Y. Lou, Q. He),
SIAM J. Scientific Computing, 37(1), pp. A536-A563, 2015.
Computing Sparse Representation in a
Highly Coherent Dictionary Based on Difference of L1 and L2 (with Y. Lou,
P. Yin, Q. He),
J. Scientific Computing, Volume 64, Issue 1 (2015), pp. 178-196.
Online: Oct. 2014, DOI:10.1007/s10915-014-9930-1.
A Geometric Blind Source Separation Method Based on
Facet Component Analysis (with P. Yin, Y. Sun),
Signal, Image, and Video Processing, 10(1), pp. 19-28, 2016.
DOI:10.1007/s11760-014-0696-6.
A sparse semi-blind source identification method and its application to Raman spectroscopy for explosives detection (with Y. Sun),
Signal Processing, Vol. 96, Part B, pp. 332-345, 2014.
Partially blind deblurring of barcode from out-of-focus blur
(with Y. Lou, E. Esser, H.K. Zhao),
SIAM Journal on Imaging Sciences, 7(2), 2014, pp. 740--760.
A Semi-Blind Source Separation Method for
Differential Optical Absorption Spectroscopy of
Atmospheric Gas Mixtures (with Y. Sun, L.M. Wingen,
B.J. Finlayson-Pitts), Inverse Problems and Imaging, 8(2), 2014, pp. 587-610.
Ratio and Difference of L1 and L2 Norms and Sparse
Representation with Coherent Dictionaries
(with P. Yin, E. Esser),
Communications in Information & Systems,
14(2), 2014, pp. 87--109.
A Method for Finding Structured Sparse Solutions
to Non-negative Least Squares Problems
with Applications (with E. Esser, Y. Lou),
SIAM Journal on Imaging Sciences, 6(4), pp. 2010--2046, 2013.
A Randomly Perturbed Infomax Algorithm for Blind Source Separation (with Q. He), Proceedings of the 38th International Conference on Acoustics, Speech,
and Signal Processing (ICASSP), Vancouver, pp. 3218-3222, May 2013.
Hybrid Deterministic-Stochastic Gradient Langevin
Dynamics for Bayesian Learning
(with Q. He),
Communications in Information & Systems
Vol. 12, No. 3, pp. 221-232, 2012.
Nonnegative Sparse Blind Source Separation for NMR Spectroscopy by Data Clustering, Model Reduction,
and L1 Minimization (with Y. Sun), SIAM J. Imaging Sciences, 5(3), 2012, pp. 886-911.
A Convex Model for Non-negative Matrix Factorization and
Dimensionality Reduction
on Physical Space
(with E. Esser, M. Moller, S. Osher, G. Sapiro),
IEEE Transactions on Image Processing, 21(7), pp. 3239-3252, 2012.
Multi-Channel L1 Regularized Convex Speech Enhancement Model and
Fast Computation by the Split Bregman Method (with M. Yu, W. Ma, S. Osher),
IEEE Transactions on Audio, Speech and Language Processing,
20(2), pp 661-675, 2012. DOI:10.1109/TASL.2011.2164526.
A Recursive Sparse Blind Source Separation Method and its Application to Correlated Data
in NMR Spectroscopy of Biofluids (with Y. Sun), Journal of Scientific Computing, 51(3), 2012,
pp. 733--753.
A Convex Model and L1 Minimization for Musical Noise Reduction in Blind Source Separation (with W. Ma, M. Yu, S. Osher), Comm Math Sciences, 10(1), pp 223-238, 2012.
Under-determined Sparse Blind Source Separation of Nonnegative and Partially
Overlapped Data (with Y. Sun),
SIAM J. Scientific Computing, Vol. 33, No. 4, pp 2063-2094, 2011.
Content Adaptive Image Matching by Color-Entropy Segmentation and Inpainting
(with Y. Sun),
the 14th International Conference on Computer Analysis
of Images and Patterns, Pedro Real et al (Eds.), CAIP 2011,
LNCS 6855, pp. 471-478, 2011, Springer-Verlag.
Modeling Category Identification Using Sparse Instance Representation (with S. Zhang, M. Lee, M. Yu),
Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin, TX, pp. 2574--2579, 2011.
Postprocessing and Sparse Blind Source Separation of Positive and Partially
Overlapped Data (with Y. Sun, C. Ridge, F. del Rio, A.J. Shaka),
Signal Processing, 91(8), 2011, pp. 1838-1851.
Convexity and Fast Speech Extraction by Split Bregman Method,
(with M. Yu, W. Ma, S. Osher), Interspeech 2010, pp 398-401, Sept 26-30,
2010, Chiba, Japan.
Reducing Musical Noise in Blind Source Separation by Time-Domain
Sparse Filters and Split Bregman Method (with W. Ma, M. Yu, S. Osher),
Interspeech 2010, pp 402-405, Sept 26-30, 2010,
Chiba, Japan.
Stochastic Approximation and a Nonlocally Weighted Soft-Constrained Recursive
Algorithm for Blind Separation of Reverberant Speech Mixtures (with M. Yu),
Discrete and Continuous Dynamical Systems-A, Vol 28, No. 4, pp 1753-1767, 2010.
A Nonlocally Weighted Soft-Constrained Natural Gradient Algorithm for Blind
Source Separation of Reverberant Speech
(with M. Yu, Y-Y Qi, H-I Yang, F-G Zeng), Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp 81-84, Oct 2009, New Paltz, NY, eds. Jacob Benesty and Tomas Gaensler.
A Time Domain Blind Decorrelation Method of Convolutive Mixtures Based on an IIR Model
(with J. Liu, Y-Y Qi), J. Computational Mathematics, Vol. 28, No. 3, pp 371-385, 2010.
A Soft-Constrained Dynamic Iterative Method of Blind Source Separation
(with J. Liu, Y-Y Qi), SIAM J. Multiscale Modeling Simulations, Vol. 7, No. 4,
pp 1795-1810, 2009.
A Time Domain Algorithm for Blind
Separation of Convolutive Sound Mixtures
and L-1 Constrained Minimization of
Cross Correlations
(with J. Liu, Y-Y Qi, F-G. Zeng),
Comm. Math Sci, Vol. 7, No. 1, 2009, pp 109-128.
A Dynamic Algorithm for Blind Separation of Convolutive Sound Mixtures
(with J. Liu, Y-Y Qi),
Neurocomputing, 72(2008), pp 521-532.
A Many to One Discrete Auditory Transform
(with Y-Y Qi),
Proceedings of ICCM 2007, Vol. III, pp 812-826, Hangzhou, China.
Ear Modeling and Sound Signal Processing,
Proceedings of ICCM 2004, Hong Kong; New Studies in Advanced Mathematics, ed. S-T Yau,
Vol. 42, pp 819-830, AMS and International Press, 2008.
A Study of Hearing Aid Gain Functions Based on a Nonlinear Nonlocal
Feedforward Cochlea Model (with Y.S. Kim, Y-Y Qi),
Hearing Research, Volume 215, Issues 1-2, May 2006, pp 84-96.
Signal Processing of Acoustic Signals in the Time Domain with an Active
Nonlinear Nonlocal Cochlear Model (with M. D. LaMar, Y-Y Qi),
Signal Processing, 86(2006), pp 360-374.
An Orthogonal Discrete Auditory Transform (with Y-Y Qi),
Communications in Math Sciences, Vol. 3, No. 2, pp 251-259, 2005.
A two-dimensional nonlinear nonlocal
feed-forward cochlear model and time domain computation of
multitone interactions (with Y.S. Kim), SIAM J. Multiscale Modeling
and Simulation, Vol. 4, No. 2, pp 664-690, 2005.
An Invertible Discrete Auditory Transform (with Y-Y Qi),
Communications in Math Sciences, Vol. 3, No. 1, pp 47-56, 2005.
Dispersive instability and its minimization in time
domain computation of steady state responses of cochlear models,
Journal of the Acoustical Society of America, 115 (5), Pt. 1,
2173-2177, 2004.
Global well-posedness and multi-tone solutions
of a class of nonlinear nonlocal cochlear models in hearing (with Y-Y Qi),
Nonlinearity, Vol. 17 (2004), No. 2, pp 711-728.
A PDE based two level model of the masking property of the human ear (with Y-Y Qi),
Communications in Math Sciences, 2003, Vol. 1, No. 4, pp. 833-840.
Time Domain Computation of a Nonlinear Nonlocal Cochlear Model
with Applications to Multitone Interaction in Hearing (with Y-Y Qi, L. Deng),
Communications in Math Sciences, Vol. 1, No. 2, pp. 211-227, 2003.
Modeling Vocal Fold Motion with a Hydrodynamic Semi-Continuum
Model (with M.D. LaMar, Y-Y Qi), Journal of the Acoustical
Society of America, Vol. 114, No. 1,
pp 455-464, 2003.
A Perception and PDE Based Nonlinear Transformation
for Processing Spoken Words (with Y-Y. Qi), Physica D, 149 (2001), pp. 143-160.
Modeling Vocal Fold Motion with a Continuum Fluid Dynamic
Model (with J. M. Hyman, and Y-Y Qi),
http://arXiv.org/abs/nlin.PS/0108039.