Why are the three numbers in 'yhat' the same? #216

Closed · SikangSHU opened this issue Feb 18, 2022 · 13 comments

@SikangSHU
Hello!
I don't know why the three numbers in 'yhat' are the same and why 'w_hat' is all zeros. I'd like to get a 'w_hat' with a few non-zero entries so that I can tell which groups of 'X' are used. Can you help me with that? Thank you very much!

###############################################################################
# Setup
# -----

import numpy as np
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error

from group_lasso import GroupLasso

np.random.seed(0)
GroupLasso.LOG_LOSSES = True

###############################################################################
# Set dataset parameters
# ----------------------
group_sizes = [3, 3, 2, 2, 2]
active_groups = np.ones(5)
active_groups[:3] = 2
np.random.shuffle(active_groups)
np.random.shuffle(active_groups)
groups = np.concatenate(
    [size * [i] for i, size in enumerate(group_sizes)]
).reshape(-1, 1)
num_coeffs = sum(group_sizes)
num_datapoints = 3

print("group_sizes:", group_sizes)
print("active_groups:", active_groups)
print("groups:", groups.shape)
print("num_coeffs:", num_coeffs)
print("____________________________________________")

###############################################################################
# Generate data matrix
# --------------------
X = np.array([[0.571, 0.095, 0.767, 0.095, 0.105, 0.767, 0.571, 0.767, 0.095, 0.767, 0.105, 0.767],
              [0.584, 0.258, 0.576, 0.258, 0.758, 0.576, 0.584, 0.576, 0.258, 0.576, 0.758, 0.576],
              [0.577, 0.961, 0.284, 0.961, 0.644, 0.284, 0.577, 0.284, 0.961, 0.284, 0.644, 0.284]])

print("X:", X.shape)
print("____________________________________________")

###############################################################################
# Generate coefficients
# ---------------------
w = np.concatenate(
    [
        np.random.standard_normal(group_size) * is_active
        for group_size, is_active in zip(group_sizes, active_groups)
    ]
)
w = w.reshape(-1, 1)
true_coefficient_mask = w != 0
intercept = 2

print("w:", w.shape)
print("true_coefficient_mask:", true_coefficient_mask.sum())
print("____________________________________________")

###############################################################################
# Generate regression targets
# ---------------------------

y_true = X @ w
y = np.array([[-0.17997138],
              [-0.15219182],
              [-0.17062552]])
print("y:", y)
MSE1 = mean_squared_error(y, y_true)
print("MSE_yt_y:", MSE1)
print("____________________________________________")

###############################################################################
# Generate estimator and train it
# -------------------------------
gl = GroupLasso(
    groups=groups,
    group_reg=5,
    l1_reg=2,
    frobenius_lipschitz=True,
    scale_reg="inverse_group_size",
    subsampling_scheme=1,
    supress_warning=True,
    n_iter=1000,
    tol=1e-3,
)
gl.fit(X, y)

###############################################################################
# Extract results and compute performance metrics
# -----------------------------------------------

# Extract info from estimator
yhat = gl.predict(X)
sparsity_mask = gl.sparsity_mask_
w_hat = gl.coef_

print("yhat:", yhat)
print("w_hat:", w_hat.sum())
print("sparsity_mask:", sparsity_mask)
print("____________________________________________")

# Compute performance metrics
R2 = r2_score(y, yhat)
MSE_y_yh = mean_squared_error(y, yhat)
print("MSE_y_yh:", MSE_y_yh)
print("____________________________________________")

The output after running the program is as follows.

group_sizes: [3, 3, 2, 2, 2]
active_groups: [2. 2. 2. 1. 1.]
groups: (12, 1)
num_coeffs: 12
____________________________________________
X: (3, 12)
____________________________________________
w: (12, 1)
true_coefficient_mask: 12
____________________________________________
y: [[-0.17997138]
 [-0.15219182]
 [-0.17062552]]
MSE_yt_y: 55.67436677644974
____________________________________________
yhat: [-0.16644355 -0.16644355 -0.16644355]
w_hat: 0.0
sparsity_mask: [False False False False False False False False False False False False]
____________________________________________
MSE_y_yh: 0.0001345342966391801
____________________________________________
@mathurinm (Owner)

Hello,
I had to adapt your code because it seemed to be using some other package. The group format you used is not supported by celer, so I changed that part; I also made y 1D instead of 2D and made X Fortran-ordered.

The issue is that your regularization strength (alpha in celer notation) is too high, leading to a model where w_hat is 0. Your prediction is then constant, because y_hat = X @ clf.coef_ + clf.intercept_ and clf.coef_, a.k.a. w_hat in your notation, is 0.

You can compute alpha_max, the smallest regularization value for which this happens. Then, setting alpha = rho * alpha_max with rho < 1 will give you a non-zero model.
See the following code:

import numpy as np
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error

from celer.homotopy import _alpha_max_grp
from celer import GroupLasso

np.random.seed(0)

###############################################################################
# Set dataset parameters
# ----------------------
group_sizes = [3, 3, 2, 2, 2]
active_groups = np.ones(5)
active_groups[:3] = 2
np.random.shuffle(active_groups)
np.random.shuffle(active_groups)
groups = np.concatenate(
    [size * [i] for i, size in enumerate(group_sizes)]
).reshape(-1, 1)
num_coeffs = sum(group_sizes)
num_datapoints = 3

print("group_sizes:", group_sizes)
print("active_groups:", active_groups)
print("groups:", groups.shape)
print("num_coeffs:", num_coeffs)
print("____________________________________________")

###############################################################################
# Generate data matrix
# --------------------
X = np.array([[0.571, 0.095, 0.767, 0.095, 0.105, 0.767, 0.571, 0.767, 0.095, 0.767, 0.105, 0.767],
              [0.584, 0.258, 0.576, 0.258, 0.758, 0.576, 0.584, 0.576, 0.258, 0.576, 0.758, 0.576],
              [0.577, 0.961, 0.284, 0.961, 0.644, 0.284, 0.577, 0.284, 0.961, 0.284, 0.644, 0.284]])

print("X:", X.shape)
print("____________________________________________")

###############################################################################
# Generate coefficients
# ---------------------
w = np.concatenate(
    [
        np.random.standard_normal(group_size) * is_active
        for group_size, is_active in zip(group_sizes, active_groups)
    ]
)
w = w.reshape(-1, 1)
true_coefficient_mask = w != 0
intercept = 2

print("w:", w.shape)
print("true_coefficient_mask:", true_coefficient_mask.sum())
print("____________________________________________")

###############################################################################
# Generate regression targets
# ---------------------------

y_true = X @ w
y = np.array([[-0.17997138],
              [-0.15219182],
              [-0.17062552]])
print("y:", y)
MSE1 = mean_squared_error(y, y_true)
print("MSE_yt_y:", MSE1)
print("____________________________________________")

###############################################################################
# Generate estimator and train it
# -------------------------------
groups_celer = [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9], [10, 11]]

X = np.asfortranarray(X)
y = y.squeeze()
alpha_max = _alpha_max_grp(X, y, groups_celer, center=True)


print(f"Maximal regularization strength giving a non-zero solution w_hat: {alpha_max:.3f}")

alpha = alpha_max/10
gl = GroupLasso(
    groups=groups_celer,
    alpha=alpha,
    max_iter=1000,
    tol=1e-3,
    verbose=1,
)
gl.fit(X, y)

###############################################################################
# Extract results and compute performance metrics
# -----------------------------------------------

# Extract info from estimator
yhat = gl.predict(X)   # not constant !

@SikangSHU (Author) commented Feb 21, 2022

Hello, thank you for your patient reply. I'd like to apply the program you gave to unmix 4 stains in an RGB image, where only one group in groups_celer = [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9], [10, 11]] contributes to each pixel of the image. But I found that the group [3, 4, 5] is always non-zero, so nothing appears in the CD8 image after stain unmixing, and I don't know why.
My program and the test image are as follows.

import numpy as np
from numpy import linalg
import matplotlib.pyplot as plt
from skimage import io
from skimage.color.colorconv import _prepare_colorarray
from sklearn.metrics import mean_squared_error
import gc

from celer.homotopy import _alpha_max_grp
from celer import GroupLasso

img = io.imread('E:\\testpicture_jupyter\\4colour_unmixing\\test9_2.png')    
img1 = _prepare_colorarray(img, force_copy=True)    
np.maximum(img1, 1E-6, out=img1)    
y = np.log(img1)

X1 = np.array([[0.571, 0.584, 0.577], [0.095, 0.258, 0.961], [0.767, 0.576, 0.284]])  # CD8,PanCK,Hema
X1_inv = linalg.inv(X1)       
B1 = y @ X1_inv
y1 = B1 @ X1

X2 = np.array([[0.095, 0.258, 0.961], [0.105, 0.758, 0.644], [0.767, 0.576, 0.284]])  # PanCK,PD-L1,Hema
X2_inv = linalg.inv(X2)       
B2 = y @ X2_inv
y2 = B2 @ X2

X3 = np.array([[0.571, 0.584, 0.577], [0.767, 0.576, 0.284], [-0.48, 0.808, -0.343]])  # CD8,Hema
X3_inv = linalg.inv(X3)       
B3 = y @ X3_inv
y3 = B3 @ X3

X4 = np.array([[0.095, 0.258, 0.961], [0.767, 0.576, 0.284], [-0.553, 0.817, -0.165]])  # PanCK,Hema
X4_inv = linalg.inv(X4)       
B4 = y @ X4_inv
y4 = B4 @ X4

X5 = np.array([[0.105, 0.758, 0.644], [0.767, 0.576, 0.284], [-0.218, 0.649, -0.729]])  # PDL1,Hema
X5_inv = linalg.inv(X5)       
B5 = y @ X5_inv
y5 = B5 @ X5

X = np.array([[0.571, 0.095, 0.767, 0.095, 0.105, 0.767, 0.571, 0.767, 0.095, 0.767, 0.105, 0.767],
              [0.584, 0.258, 0.576, 0.258, 0.758, 0.576, 0.584, 0.576, 0.258, 0.576, 0.758, 0.576],
              [0.577, 0.961, 0.284, 0.961, 0.644, 0.284, 0.577, 0.284, 0.961, 0.284, 0.644, 0.284]])
X = np.asfortranarray(X)

groups_celer = [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9], [10, 11]]

a = 0
b = 0
rgb_CD8 = np.zeros_like(y) + 1
rgb_PanCK = np.zeros_like(y) + 1
rgb_Hema = np.zeros_like(y) + 1
rgb_PDL1 = np.zeros_like(y) + 1

for i in y:
    for j in i:
        alpha_max = _alpha_max_grp(X, j, groups_celer, center=True)
        alpha = alpha_max / 3
        gl = GroupLasso(
            groups=groups_celer,
            alpha=alpha,
            max_iter=10,
            tol=1e-3,
            verbose=1,
        )
        gl.fit(X, j)
        w_hat = gl.coef_

        if w_hat[3] != 0:    
            null = np.zeros_like(B2[:, :, 0])
            B2_A = np.stack((B2[:, :, 0], null, null), axis=-1) 
            B2_B = np.stack((null, B2[:, :, 1], null), axis=-1)
            B2_C = np.stack((null, null, B2[:, :, 2]), axis=-1)
            conv_matrix = X2
            log_rgb21 = B2_A[a][b] @ conv_matrix
            rgb_PanCK[a][b] = np.exp(log_rgb21)
            log_rgb22 = B2_B[a][b] @ conv_matrix
            rgb_PDL1[a][b] = np.exp(log_rgb22)
            log_rgb23 = B2_C[a][b] @ conv_matrix
            rgb_Hema[a][b] = np.exp(log_rgb23)
        elif w_hat[0] != 0:             
            null = np.zeros_like(B1[:, :, 0])
            B1_A = np.stack((B1[:, :, 0], null, null), axis=-1)  
            B1_B = np.stack((null, B1[:, :, 1], null), axis=-1)  
            B1_C = np.stack((null, null, B1[:, :, 2]), axis=-1)  
            conv_matrix = X1
            log_rgb11 = B1_A[a][b] @ conv_matrix
            rgb_CD8[a][b] = np.exp(log_rgb11)
            log_rgb12 = B1_B[a][b] @ conv_matrix
            rgb_PanCK[a][b] = np.exp(log_rgb12)
            log_rgb13 = B1_C[a][b] @ conv_matrix
            rgb_Hema[a][b] = np.exp(log_rgb13)
        elif w_hat[6] != 0:  
            null = np.zeros_like(B3[:, :, 0])
            B3_A = np.stack((B3[:, :, 0], null, null), axis=-1)  
            B3_B = np.stack((null, B3[:, :, 1], null), axis=-1)  
            conv_matrix = X3
            log_rgb31 = B3_A[a][b] @ conv_matrix
            rgb_CD8[a][b] = np.exp(log_rgb31)
            log_rgb32 = B3_B[a][b] @ conv_matrix
            rgb_Hema[a][b] = np.exp(log_rgb32)
        elif w_hat[8] != 0: 
            null = np.zeros_like(B4[:, :, 0])
            B4_A = np.stack((B4[:, :, 0], null, null), axis=-1) 
            B4_B = np.stack((null, B4[:, :, 1], null), axis=-1) 
            conv_matrix = X4
            log_rgb41 = B4_A[a][b] @ conv_matrix
            rgb_PanCK[a][b] = np.exp(log_rgb41)
            log_rgb42 = B4_B[a][b] @ conv_matrix
            rgb_Hema[a][b] = np.exp(log_rgb42)
        else:           
            null = np.zeros_like(B5[:, :, 0])
            B5_A = np.stack((B5[:, :, 0], null, null), axis=-1)
            B5_B = np.stack((null, B5[:, :, 1], null), axis=-1)
            conv_matrix = X5
            log_rgb51 = B5_A[a][b] @ conv_matrix
            rgb_PDL1[a][b] = np.exp(log_rgb51)
            log_rgb52 = B5_B[a][b] @ conv_matrix
            rgb_Hema[a][b] = np.exp(log_rgb52)
        b = b + 1
        print(b)
        print(a)
    b = 0
    a = a + 1

fig, axes = plt.subplots(3, 2, figsize=(8, 7), sharex=True, sharey=True)
ax = axes.ravel()

ax[0].imshow(img)
ax[0].set_title("Original image")

rgb_Hema[rgb_Hema < 0] = 0
rgb_Hema[rgb_Hema > 1] = 1
ax[2].imshow(rgb_Hema)
ax[2].set_title("Hematoxylin")

rgb_PanCK[rgb_PanCK < 0] = 0
rgb_PanCK[rgb_PanCK > 1] = 1
ax[3].imshow(rgb_PanCK)
ax[3].set_title("PanCK")

rgb_PDL1[rgb_PDL1 < 0] = 0
rgb_PDL1[rgb_PDL1 > 1] = 1
ax[4].imshow(rgb_PDL1)
ax[4].set_title("PDL1")

rgb_CD8[rgb_CD8 < 0] = 0
rgb_CD8[rgb_CD8 > 1] = 1
ax[5].imshow(rgb_CD8)
ax[5].set_title("CD8")

for a in ax.ravel():
    a.axis('off')

fig.tight_layout()
plt.show()

gc.collect()

[attached images: test9_2 (test image) and test9_2_gl_it10 (unmixing result)]

@mathurinm (Owner)

Hello,
Please don't quote my reply, it makes your message longer to read.

I don't understand what you mean by "the group [3, 4, 5] is always non-zero, so there's nothing in the CD8 image after doing stain unmixing".

  • Did you try changing the value of the alpha parameter in GroupLasso to obtain different (non-zero) active groups? Higher alpha gives fewer active groups, while smaller alphas should eventually make all groups active (see the sketch after this list).
  • What do you mean by "there is nothing in the image after unmixing"?
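
To illustrate the first point, here is a minimal sketch (not code from the thread: random toy data, with the group layout borrowed from the example above) that sweeps alpha downward from alpha_max and counts the active groups:

import numpy as np
from celer import GroupLasso
from celer.homotopy import _alpha_max_grp

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(30, 12))  # toy data, same group layout as above
y = rng.randn(30)
groups = [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9], [10, 11]]

alpha_max = _alpha_max_grp(X, y, groups, center=True)
for rho in [1.0, 0.5, 0.1, 0.01]:
    gl = GroupLasso(groups=groups, alpha=rho * alpha_max,
                    max_iter=1000, tol=1e-6).fit(X, y)
    # a group is active if any of its coefficients is non-zero
    n_active = sum(np.any(gl.coef_[g] != 0) for g in groups)
    print(f"rho = {rho:4.2f} -> {n_active} active groups")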

@SikangSHU (Author)

I mean that w_hat[3], w_hat[4], and w_hat[5] in my code are always non-zero, so the "if w_hat[3] != 0" branch, which unmixes PanCK, PDL1, and Hema, is always the one taken, and CD8 is neglected. I have tried changing the value of the alpha parameter in GroupLasso, but w_hat[3], w_hat[4], and w_hat[5] always contribute.

@mathurinm (Owner)

This is not a problem with the solver, but with the data you have. My guess is that the features in this group are strongly correlated with your response y, so the group lasso model always includes them regardless of the regularization strength. You could try other images to obtain different results (your image is not publicly available, so I could not try).
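
One way to test this guess (a sketch, not from the thread): as alpha decreases below alpha_max, the first group to activate is the one maximizing the correlation norm ||X[:, g].T @ y_centered||_2 (up to the package's sample-size scaling), so the groups can be ranked directly:

import numpy as np

# Rank groups by correlation with the (centered) response; the group with the
# largest norm is the first to enter the model as alpha shrinks.
X = np.array([[0.571, 0.095, 0.767, 0.095, 0.105, 0.767, 0.571, 0.767, 0.095, 0.767, 0.105, 0.767],
              [0.584, 0.258, 0.576, 0.258, 0.758, 0.576, 0.584, 0.576, 0.258, 0.576, 0.758, 0.576],
              [0.577, 0.961, 0.284, 0.961, 0.644, 0.284, 0.577, 0.284, 0.961, 0.284, 0.644, 0.284]])
y = np.array([-0.17997138, -0.15219182, -0.17062552])  # one example pixel
groups = [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9], [10, 11]]

y_c = y - y.mean()  # center, matching _alpha_max_grp(..., center=True)
for g in groups:
    print(g, "||X_g^T y_c|| =", round(float(np.linalg.norm(X[:, g].T @ y_c)), 4))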

Please be sure to cite/acknowledge our work if you end up using it in your application.
Best regards

@SikangSHU (Author)

OK, I will cite your work. Thank you for your patience!

@SikangSHU (Author) commented Feb 22, 2022

Hello! I found that yhat = X @ gl.coef_ + gl.intercept_. So what is gl.intercept_? My ideal formula is yhat = X @ gl.coef_, and I'd like to get gl.coef_ to do the following work.

@mathurinm (Owner)

Hi, the intercept is the constant term in the prediction function (also called the bias). If you don't want it, you can set fit_intercept=False when instantiating your GroupLasso (it is True by default).
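
A quick sketch on toy data (not from the thread): with fit_intercept=False, the prediction reduces exactly to X @ gl.coef_.

import numpy as np
from celer import GroupLasso

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(20, 6))
y = rng.randn(20)

gl = GroupLasso(groups=[[0, 1, 2], [3, 4, 5]], alpha=0.01,
                fit_intercept=False).fit(X, y)
print(gl.intercept_)                             # 0.0: no constant term
print(np.allclose(gl.predict(X), X @ gl.coef_))  # True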

@SikangSHU (Author)

OK. I get it! Thank you.

@SikangSHU (Author)

Dear expert:
Hello! In my original image, PanCK is the yellow component, PDL1 the red one, Hema the blue one, and CD8 the black one. There seems to be very little black in the original image, so I don't understand why the solver finds the CD8 feature (the first column of X = np.array([[0.571, 0.767, 0.095, 0.105], [0.584, 0.576, 0.258, 0.758], [0.577, 0.284, 0.961, 0.644]]), which is the stain vector of CD8) strongly correlated with the response y, nor why there are so many black patterns in the CD8 result image.
The result using the following program: [attached image: test9_4_dulifenzu]
Test image (original image): [attached image: test9_4]

@SikangSHU (Author)

import numpy as np
from numpy import linalg
import matplotlib.pyplot as plt
from skimage import io
from skimage.color.colorconv import _prepare_colorarray
from sklearn.metrics import mean_squared_error
import gc

from celer.homotopy import _alpha_max_grp
from celer import GroupLasso

img = io.imread('E:\\testpicture_jupyter\\4colour_unmixing\\test9_4.png')    
img1 = _prepare_colorarray(img, force_copy=True)    
np.maximum(img1, 1E-6, out=img1)   
y = np.log(img1)

X1 = np.array([[0.571, 0.584, 0.577], [0.095, 0.258, 0.961], [0.767, 0.576, 0.284]])  # CD8,PanCK,Hema
X1_inv = linalg.inv(X1)       
B1 = y @ X1_inv
y1 = B1 @ X1

X2 = np.array([[0.095, 0.258, 0.961], [0.105, 0.758, 0.644], [0.767, 0.576, 0.284]])  # PanCK,PD-L1,Hema
X2_inv = linalg.inv(X2)       
B2 = y @ X2_inv
y2 = B2 @ X2

X3 = np.array([[0.571, 0.584, 0.577], [0.767, 0.576, 0.284], [-0.48, 0.808, -0.343]])  # CD8,Hema
X3_inv = linalg.inv(X3)       
B3 = y @ X3_inv
y3 = B3 @ X3

X4 = np.array([[0.095, 0.258, 0.961], [0.767, 0.576, 0.284], [-0.553, 0.817, -0.165]])  # PanCK,Hema
X4_inv = linalg.inv(X4)       
B4 = y @ X4_inv
y4 = B4 @ X4

X5 = np.array([[0.105, 0.758, 0.644], [0.767, 0.576, 0.284], [-0.218, 0.649, -0.729]])  # PDL1,Hema
X5_inv = linalg.inv(X5)      
B5 = y @ X5_inv
y5 = B5 @ X5

# X = np.array([[0.571, 0.095, 0.767, 0.095, 0.105, 0.767, 0.571, 0.767, 0.095, 0.767, 0.105, 0.767],
#               [0.584, 0.258, 0.576, 0.258, 0.758, 0.576, 0.584, 0.576, 0.258, 0.576, 0.758, 0.576],
#               [0.577, 0.961, 0.284, 0.961, 0.644, 0.284, 0.577, 0.284, 0.961, 0.284, 0.644, 0.284]])
X = np.array([[0.571, 0.767, 0.095, 0.105],    # CD8, Hema, PanCK, PDL1
              [0.584, 0.576, 0.258, 0.758],
              [0.577, 0.284, 0.961, 0.644]])
X = np.asfortranarray(X)

# groups_celer = [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9], [10, 11]]
groups_celer = [[0], [1], [2], [3]]

a = 0
b = 0
rgb_CD8 = np.zeros_like(y) + 1
rgb_PanCK = np.zeros_like(y) + 1
rgb_Hema = np.zeros_like(y) + 1
rgb_PDL1 = np.zeros_like(y) + 1

for i in y:
    for j in i:
        alpha_max = _alpha_max_grp(X, j, groups_celer, center=True)
        alpha = alpha_max * 0.5
        gl = GroupLasso(
            groups=groups_celer,
            alpha=alpha,
            max_iter=10,
            tol=1e-3,
            verbose=1,
            fit_intercept=False
        )
        gl.fit(X, j)
        yhat = gl.predict(X)
        w_hat = gl.coef_
        print("alpha_max:", alpha_max)
        print("w_hat:", w_hat)

        if w_hat[0] != 0:     # CD8
            log_CD8 = w_hat[0] * np.array([0.571, 0.584, 0.577])
            print("log_CD8:", log_CD8)
            rgb_CD8[a][b] = np.exp(log_CD8)

        if w_hat[1] != 0:    # Hema
            log_Hema = w_hat[1] * np.array([0.767, 0.576, 0.284])
            print("log_Hema:", log_Hema)
            rgb_Hema[a][b] = np.exp(log_Hema)

        if w_hat[2] != 0:  # PanCK
            log_PanCK = w_hat[2] * np.array([0.095, 0.258, 0.961])
            rgb_PanCK[a][b] = np.exp(log_PanCK)

        if w_hat[3] != 0:  # PDL1
            log_PDL1 = w_hat[3] * np.array([0.105, 0.758, 0.644])
            rgb_PDL1[a][b] = np.exp(log_PDL1)

        b = b + 1
        print(b)
        print(a)
    b = 0
    a = a + 1

fig, axes = plt.subplots(3, 2, figsize=(8, 7), sharex=True, sharey=True)
ax = axes.ravel()

ax[0].imshow(img)
ax[0].set_title("Original image")

rgb_Hema[rgb_Hema < 0] = 0
rgb_Hema[rgb_Hema > 1] = 1
ax[2].imshow(rgb_Hema)
ax[2].set_title("Hematoxylin")

rgb_PanCK[rgb_PanCK < 0] = 0
rgb_PanCK[rgb_PanCK > 1] = 1
ax[3].imshow(rgb_PanCK)
ax[3].set_title("PanCK")

rgb_PDL1[rgb_PDL1 < 0] = 0
rgb_PDL1[rgb_PDL1 > 1] = 1
ax[4].imshow(rgb_PDL1)
ax[4].set_title("PDL1")

rgb_CD8[rgb_CD8 < 0] = 0
rgb_CD8[rgb_CD8 > 1] = 1
ax[5].imshow(rgb_CD8)
ax[5].set_title("CD8")

for a in ax.ravel():
    a.axis('off')

fig.tight_layout()
plt.show()

@SikangSHU (Author)

Hello, I'd like to ask whether alpha in the program is the lambda in the following formula. Thank you!
[attached image: group lasso objective formula]

import numpy as np
from numpy import linalg
import matplotlib.pyplot as plt
from skimage import io
from skimage.color.colorconv import _prepare_colorarray
from sklearn.metrics import mean_squared_error
import gc

from celer.homotopy import _alpha_max_grp
from celer import GroupLasso

img = io.imread('E:\\testpicture_jupyter\\4colour_unmixing\\test9_4.png')    
img1 = _prepare_colorarray(img, force_copy=True)   
np.maximum(img1, 1E-6, out=img1) 
y = np.log(img1)

X1 = np.array([[0.279, 0.552, 0.786], [0.095, 0.258, 0.961], [0.767, 0.576, 0.284]])  # CD8,PanCK,Hema
X1_inv = linalg.inv(X1)  
B1 = y @ X1_inv
y1 = B1 @ X1

X2 = np.array([[0.095, 0.258, 0.961], [0.105, 0.758, 0.644], [0.767, 0.576, 0.284]])  # PanCK,PD-L1,Hema
X2_inv = linalg.inv(X2)    
B2 = y @ X2_inv
y2 = B2 @ X2

X3 = np.array([[0.279, 0.552, 0.786], [0.767, 0.576, 0.284], [-0.451, 0.798, -0.4]])  # CD8,Hema
X3_inv = linalg.inv(X3) 
B3 = y @ X3_inv
y3 = B3 @ X3

X4 = np.array([[0.095, 0.258, 0.961], [0.767, 0.576, 0.284], [-0.553, 0.817, -0.165]])  # PanCK,Hema
X4_inv = linalg.inv(X4)   
B4 = y @ X4_inv
y4 = B4 @ X4

X5 = np.array([[0.105, 0.758, 0.644], [0.767, 0.576, 0.284], [-0.218, 0.649, -0.729]])  # PDL1,Hema
X5_inv = linalg.inv(X5)     
B5 = y @ X5_inv
y5 = B5 @ X5

X = np.array([[0.095, 0.279, 0.767, 0.095, 0.105],
              [0.258, 0.552, 0.576, 0.258, 0.758],
              [0.961, 0.786, 0.284, 0.961, 0.644]])

X = np.asfortranarray(X)

groups_celer = [[0, 1], [2], [3, 4]]


a = 0
b = 0
c = 0
d = 0
e = 0
f = 0
g = 0
rgb_CD8 = np.zeros_like(y) + 1
rgb_PanCK = np.zeros_like(y) + 1
rgb_Hema = np.zeros_like(y) + 1
rgb_PDL1 = np.zeros_like(y) + 1

for i in y:
    for j in i:
        alpha_max = _alpha_max_grp(X, j, groups_celer, center=True)
        alpha = alpha_max * 1
        gl = GroupLasso(
            groups=groups_celer,
            alpha=alpha,
            max_iter=10,
            tol=1e-3,
            verbose=1,
            fit_intercept=False
        )
        gl.fit(X, j)
        yhat = gl.predict(X)
        w_hat = gl.coef_
        print("alpha_max:", alpha_max)
        print("w_hat:", w_hat)
        print("yhat:", yhat)
        print("X @ w_hat:", X @ w_hat)
        print("j:", j)

        if w_hat[2] != 0:
            log_Hema = w_hat[2] * np.array([0.767, 0.576, 0.284])
            rgb_Hema[a][b] = np.exp(log_Hema)
            c = c + 1

        if w_hat[0] != 0:
            log_PanCK = w_hat[0] * np.array([0.095, 0.258, 0.961])
            log_CD8 = w_hat[1] * np.array([0.279, 0.552, 0.786])
            rgb_PanCK[a][b] = np.exp(log_PanCK)
            rgb_CD8[a][b] = np.exp(log_CD8)
            d = d + 1

        if w_hat[3] != 0:
            log_PanCK = w_hat[3] * np.array([0.095, 0.258, 0.961])
            log_PDL1 = w_hat[4] * np.array([0.105, 0.758, 0.644])
            rgb_PanCK[a][b] = np.exp(log_PanCK)
            rgb_PDL1[a][b] = np.exp(log_PDL1)
            e = e + 1

        b = b + 1
        print(b)
        print(a)
    b = 0
    a = a + 1

print("c, d, e, f, g:", c, d, e, f, g)

fig, axes = plt.subplots(3, 2, figsize=(9, 7), sharex=True, sharey=True)
ax = axes.ravel()

ax[0].imshow(img)
ax[0].set_title("Original image")

rgb_Hema[rgb_Hema < 0] = 0
rgb_Hema[rgb_Hema > 1] = 1
ax[2].imshow(rgb_Hema)
ax[2].set_title("Hematoxylin")

rgb_PanCK[rgb_PanCK < 0] = 0
rgb_PanCK[rgb_PanCK > 1] = 1
ax[3].imshow(rgb_PanCK)
ax[3].set_title("PanCK")

rgb_PDL1[rgb_PDL1 < 0] = 0
rgb_PDL1[rgb_PDL1 > 1] = 1
ax[4].imshow(rgb_PDL1)
ax[4].set_title("PDL1")

rgb_CD8[rgb_CD8 < 0] = 0
rgb_CD8[rgb_CD8 > 1] = 1
ax[5].imshow(rgb_CD8)
ax[5].set_title("CD8")

for a in ax.ravel():
    a.axis('off')

fig.tight_layout()
plt.show()

gc.collect()

@mathurinm (Owner)

Yes, our alpha is the lambda in the formula you pasted (lambda is a reserved keyword in Python, so we could not use it, though it is the typical name in the literature).

Beware that we don't have a scaling by sqrt(n_g). More refined weighting of the group penalty should be implemented in the weeks to come, see #220.

You can refer to the first lines of the docstring of the GroupLasso class for a mathematical formulation of the objective function.
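
For reference, given the absence of sqrt(n_g) weights noted above, the objective should be the standard unweighted group lasso (a sketch consistent with that remark, not copied from the docstring):

\min_{w \in \mathbb{R}^p} \; \frac{1}{2\, n_{\mathrm{samples}}} \lVert y - X w \rVert_2^2 \;+\; \alpha \sum_{g \in \mathcal{G}} \lVert w_g \rVert_2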
