function [A,S]=jade(X,m)
% Source separation of complex signals with JADE.
% Jade performs `Source Separation' in the following sense:
% X is an n x T data matrix assumed modelled as X = A S + N where
%
% o A is an unknown n x m matrix with full rank.
% o S is an m x T data matrix (source signals) with the properties
% a) for each t, the components of S(:,t) are statistically
% independent
% b) for each p, S(p,:) is the realization of a zero-mean
% `source signal'.
% c) At most one of these processes has a vanishing 4th-order
% cumulant.
% o N is an n x T matrix. It is a realization of a spatially white
% Gaussian noise, i.e. Cov(N) = sigma*eye(n) with unknown variance
% sigma. This is probably better than no modeling at all...
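The model above is easy to simulate. The following NumPy sketch generates data matching assumptions a)-c); the QPSK-type sources, the dimensions, and the noise level are illustrative choices, not part of jade.m:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 4, 3, 10_000          # sensors, sources, samples
sigma = 0.01                    # noise variance (illustrative)

# m independent, zero-mean, non-Gaussian sources (QPSK symbols):
# each has a non-vanishing 4th-order cumulant, as condition c) requires.
S = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=(m, T)) / np.sqrt(2)

# unknown full-rank n x m mixing matrix
A = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))

# spatially white Gaussian noise with covariance sigma*eye(n)
N = np.sqrt(sigma / 2) * (rng.normal(size=(n, T)) + 1j * rng.normal(size=(n, T)))

X = A @ S + N                   # the n x T sensor data fed to jade
```

With T this large, the sample cross-moments of distinct sources are close to zero, which is what the independence assumption buys the algorithm.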
%
% Jade performs source separation via a
% Joint Approximate Diagonalization of Eigen-matrices.
%
% THIS VERSION ASSUMES ZERO-MEAN SIGNALS
%
% Input :
% * X: Each column of X is a sample from the n sensors
% * m: m is an optional argument for the number of sources.
% If omitted, JADE assumes as many sources as sensors.
%
% Output :
% * A is an n x m estimate of the mixing matrix
% * S is an m x T naive (i.e. pinv(A)*X) estimate of the source signals
%
%
% Version 1.5. Copyright: JF Cardoso.
%
% See notes, references and revision history at the bottom of this file
[n,T] = size(X);
%% source detection not implemented yet !
if nargin==1, m=n ; end;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% A few parameters that could be adjusted
nem = m; % number of eigen-matrices to be diagonalized
seuil = 1/sqrt(T)/100;% a statistical threshold for stopping joint diag
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% whitening
%
%%% joint approximate diagonalization of the eigen-matrices
B = [ 1 0 0 ; 0 1 1 ; 0 -i i ] ;
encore = 1 ; V = eye(m) ;
while encore, encore = 0 ;
for p = 1:m-1,
for q = p+1:m,
Ip = p:m:nem*m ; Iq = q:m:nem*m ;
%%% computation of the Givens angle
g = [ M(p,Ip)-M(q,Iq) ; M(p,Iq) ; M(q,Ip) ] ;
[vcp,D] = eig(real(B*(g*g')*B')) ;
[la,K] = sort(diag(D)) ;
angles = vcp(:,K(3)) ;
if angles(1)<0 , angles = -angles ; end ;
c = sqrt(0.5+angles(1)/2) ;
s = 0.5*(angles(2)-j*angles(3))/c ;
if abs(s)>seuil, %%% updates matrices M and V by a Givens rotation
encore = 1 ;
pair = [p;q] ;
G = [ c -conj(s) ; s c ] ;
V(:,pair) = V(:,pair)*G ;
M(pair,:) = G' * M(pair,:) ;
M(:,[Ip Iq]) = [ c*M(:,Ip)+s*M(:,Iq) -conj(s)*M(:,Ip)+c*M(:,Iq) ] ;
end%% if
end%% q loop
end%% p loop
end%% while
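The rotation applied in the sweep is G = [ c -conj(s) ; s c ] with c real. In jade.m, c and s are derived from a unit-norm eigenvector `angles` as c = sqrt(0.5+angles(1)/2) and s = 0.5*(angles(2)-j*angles(3))/c, which forces c^2 + |s|^2 = 1, so each update keeps V unitary. A small NumPy check of this parameterization; the random unit vector is only an illustrative stand-in for the eigenvector computed in the sweep:

```python
import numpy as np

rng = np.random.default_rng(1)

# unit-norm 3-vector with non-negative first entry, as after the
# sign flip `if angles(1)<0, angles = -angles; end` in the code
angles = rng.normal(size=3)
angles /= np.linalg.norm(angles)
if angles[0] < 0:
    angles = -angles

c = np.sqrt(0.5 + angles[0] / 2)            # real cosine
s = 0.5 * (angles[1] - 1j * angles[2]) / c  # complex sine

G = np.array([[c, -np.conj(s)], [s, c]])

# since |angles| = 1, one gets |s|^2 = (1 - angles[0]) / 2,
# hence c^2 + |s|^2 = 1 and G*G' = eye(2)
err = np.linalg.norm(G @ G.conj().T - np.eye(2))
```

Because every G is unitary, the accumulated V (and hence the separating matrix V') stays unitary throughout the sweeps.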
%%%estimation of the mixing matrix and signal separation
A = IW*V;
S = V'*Y ;
return ;
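Since V is unitary and, for an invertible IW, the whitener satisfies W = pinv(IW), the separator S = V'*Y coincides with the naive estimate pinv(A)*X announced in the header: pinv(A) = pinv(IW*V) = V'*W. A quick NumPy check of this identity; the random matrices below stand in for the whitener and for the rotation found by the joint diagonalization:

```python
import numpy as np

rng = np.random.default_rng(2)
m, T = 3, 50

# stand-in for the inverse whitener IW and the whitener W = pinv(IW)
IW = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
W = np.linalg.pinv(IW)

# stand-in for the unitary matrix found by the joint diagonalization
V, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))

X = rng.normal(size=(m, T)) + 1j * rng.normal(size=(m, T))
Y = W @ X                   # whitened data

A = IW @ V                  # A = IW*V  as in the code
S1 = V.conj().T @ Y         # S = V'*Y  as in the code
S2 = np.linalg.pinv(A) @ X  # naive estimate promised in the header

err = np.linalg.norm(S1 - S2)
```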
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Note 1: This version does *not* assume circularly distributed
% signals as 1.1 did. The difference only entails more computations
% in estimating the cumulants
%
%
% Note 2: This code tries to minimize the work load by jointly
% diagonalizing only the m most significant eigenmatrices of the
% cumulant tensor. When the model holds, this avoids the
% diagonalization of m^2 matrices. However, when the model does not
% hold, there are in general more than m significant eigen-matrices.
% In this case, this code still `works' but is no longer equivalent to
% the minimization of a well defined contrast function: this would
% require the diagonalization of *all* the eigen-matrices. We note
% (see the companion paper) that diagonalizing **all** the
% eigen-matrices is strictly equivalent to diagonalizing all the
% `parallel cumulant slices'. In other words, when the model does
% not hold, it could be a good idea to diagonalize all the parallel
% cumulant slices. The joint diagonalization will require about m
% times more operations, but on the other hand, computation of the
% eigen-matrices is avoided. Such an approach makes sense when
% dealing with a relatively small number of sources (say smaller than
% 10).
%
%
% Revision history
%-----------------
%
% Version 1.5 (Nov. 2, 97) :
% o Added the option kindly provided by Jerome Galy
% (galy@dirac.ensica.fr) to compute the sample cumulant tensor.
% This option uses more memory but is faster (a similar piece of
% code was also passed to me by Sandip Bose).
% o Suppressed the useless variable `oui'.
% o Changed (angles=sign(angles(1))*angles) to (if angles(1)<0 ,
% angles= -angles ; end ;) as suggested by Iain Collings.
% This is safer (the boundary case angles(1)=0 occurs with
% probability 0 for sample statistics).
% o Cosmetic rewriting of the doc. Fixed some typos and added new
% ones.
%
% Version 1.4 (Oct. 9, 97) : Changed the code for estimating
% cumulants. The new version loops thru the sensor indices rather than
% looping thru the time index. This is much faster for large sample
% sizes. Also did some clean up. One can now change the number of
% eigen-matrices to be jointly diagonalized by just changing the
% variable `nem'. It is still hard coded below to be equal to the
% number of sources. This is more economical and OK when the model
% holds but not appropriate when the model does not hold (in which
% case, the algorithm is no longer asymptotically equivalent to
% minimizing a contrast function, unless nem is the square of the
% number of sources.)
%
% Version 1.3 (Oct. 6, 97) : Added various Matlab tricks to speed up
% things a bit. This is not very rewarding though, because the main
% computational burden still is in estimating the 4th-order moments.
%
% Version 1.2 (Mar., Apr., Sept. 97) : Corrected some mistakes **in
% the comments !!**, added note 2 `When the model does not hold' and
% the EUSIPCO reference.
%
% Version 1.1 (Feb. 94): Creation
%
%-------------------------------------------------------------------
%
% Contact JF Cardoso for any comment, bug report, and UPDATED VERSIONS.
% email : cardoso@sig.enst.fr
% or check the WEB page http://sig.enst.fr/~cardoso/stuff.html
%
% Reference:
% @article{CS_iee_94,
% author = "Jean-Fran\c{c}ois Cardoso and Antoine Souloumiac",
% journal = "IEE Proceedings-F",
% title = "Blind beamforming for non {G}aussian signals",
% number = "6",
% volume = "140",
% month = dec,
% pages = "362--370",
% year = "1993"}
%
%
% Some analytical insights into the asymptotic performance of JADE are in
% @inproceedings{PerfEusipco94,
% HTML = "ftp://sig.enst.fr/pub/jfc/Papers/eusipco94_perf.ps.gz",
% author = "Jean-Fran\c{c}ois Cardoso",
% address = {Edinburgh},
% booktitle = "{Proc. EUSIPCO}",
% month = sep,
% pages = "776--779",
% title = "On the performance of orthogonal source separation algorithms",
% year = 1994}
%_________________________________________________________________________
% jade.m ends here