In [29]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
In [30]:
data = pd.read_csv('food_truck.csv')
data.head()
Out[30]:
   Population(10.000)  Profit(10.000 $)
0              6.1101           17.5920
1              5.5277            9.1302
2              8.5186           13.6620
3              7.0032           11.8540
4              5.8598            6.8233
In [31]:
X = data['Population(10.000)']
y = data['Profit(10.000 $)']

Linear regression (by hand)
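The least-squares estimates minimize the sum of squared residuals Σ(yᵢ − b₀ − b₁xᵢ)²; the closed-form solution is b1 = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and b0 = ȳ − b1·x̄, which the next cell implements term by term.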

In [32]:
x_bar = np.mean(X)            # mean of the population values
y_bar = np.mean(y)            # mean of the profit values
m = len(X)                    # number of observations
t1 = [(X[i] - x_bar) * (y[i] - y_bar) for i in range(m)]   # cross-deviation terms
t2 = [(X[i] - x_bar)**2 for i in range(m)]                 # squared deviations of X
b1 = np.sum(t1) / np.sum(t2)  # slope
b0 = y_bar - b1 * x_bar       # intercept

y_hat = [b1 * X[i] + b0 for i in range(m)]                 # fitted values
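
As a cross-check, the same estimates can be computed without an explicit loop; a minimal sketch reusing the variables above (np.polyfit is the standard NumPy one-liner for a degree-1 fit):

# Vectorized closed-form estimates via pandas/NumPy broadcasting
b1_vec = np.sum((X - x_bar) * (y - y_bar)) / np.sum((X - x_bar)**2)
b0_vec = y_bar - b1_vec * x_bar

# np.polyfit with degree 1 returns [slope, intercept]; both should match b1, b0
slope_np, intercept_np = np.polyfit(X, y, 1)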
In [33]:
plt.grid()
plt.scatter(X, y)
plt.plot(X, y_hat, c='red') 
Out[33]:
[<matplotlib.lines.Line2D at 0xffff2f8591d0>]
[Figure: scatter plot of profit vs. population with the fitted regression line in red]

Linear regression (with scipy.stats)

In [34]:
from scipy import stats
help(stats.linregress)
Help on function linregress in module scipy.stats._stats_mstats_common:

linregress(x, y=None, alternative='two-sided')
    Calculate a linear least-squares regression for two sets of measurements.
    
    Parameters
    ----------
    x, y : array_like
        Two sets of measurements.  Both arrays should have the same length.  If
        only `x` is given (and ``y=None``), then it must be a two-dimensional
        array where one dimension has length 2.  The two sets of measurements
        are then found by splitting the array along the length-2 dimension. In
        the case where ``y=None`` and `x` is a 2x2 array, ``linregress(x)`` is
        equivalent to ``linregress(x[0], x[1])``.
    alternative : {'two-sided', 'less', 'greater'}, optional
        Defines the alternative hypothesis. Default is 'two-sided'.
        The following options are available:
    
        * 'two-sided': the slope of the regression line is nonzero
        * 'less': the slope of the regression line is less than zero
        * 'greater':  the slope of the regression line is greater than zero
    
        .. versionadded:: 1.7.0
    
    Returns
    -------
    result : ``LinregressResult`` instance
        The return value is an object with the following attributes:
    
        slope : float
            Slope of the regression line.
        intercept : float
            Intercept of the regression line.
        rvalue : float
            The Pearson correlation coefficient. The square of ``rvalue``
            is equal to the coefficient of determination.
        pvalue : float
            The p-value for a hypothesis test whose null hypothesis is
            that the slope is zero, using Wald Test with t-distribution of
            the test statistic. See `alternative` above for alternative
            hypotheses.
        stderr : float
            Standard error of the estimated slope (gradient), under the
            assumption of residual normality.
        intercept_stderr : float
            Standard error of the estimated intercept, under the assumption
            of residual normality.
    
    See Also
    --------
    scipy.optimize.curve_fit :
        Use non-linear least squares to fit a function to data.
    scipy.optimize.leastsq :
        Minimize the sum of squares of a set of equations.
    
    Notes
    -----
    Missing values are considered pair-wise: if a value is missing in `x`,
    the corresponding value in `y` is masked.
    
    For compatibility with older versions of SciPy, the return value acts
    like a ``namedtuple`` of length 5, with fields ``slope``, ``intercept``,
    ``rvalue``, ``pvalue`` and ``stderr``, so one can continue to write::
    
        slope, intercept, r, p, se = linregress(x, y)
    
    With that style, however, the standard error of the intercept is not
    available.  To have access to all the computed values, including the
    standard error of the intercept, use the return value as an object
    with attributes, e.g.::
    
        result = linregress(x, y)
        print(result.intercept, result.intercept_stderr)
    
    Examples
    --------
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> from scipy import stats
    >>> rng = np.random.default_rng()
    
    Generate some data:
    
    >>> x = rng.random(10)
    >>> y = 1.6*x + rng.random(10)
    
    Perform the linear regression:
    
    >>> res = stats.linregress(x, y)
    
    Coefficient of determination (R-squared):
    
    >>> print(f"R-squared: {res.rvalue**2:.6f}")
    R-squared: 0.717533
    
    Plot the data along with the fitted line:
    
    >>> plt.plot(x, y, 'o', label='original data')
    >>> plt.plot(x, res.intercept + res.slope*x, 'r', label='fitted line')
    >>> plt.legend()
    >>> plt.show()
    
    Calculate 95% confidence interval on slope and intercept:
    
    >>> # Two-sided inverse Students t-distribution
    >>> # p - probability, df - degrees of freedom
    >>> from scipy.stats import t
    >>> tinv = lambda p, df: abs(t.ppf(p/2, df))
    
    >>> ts = tinv(0.05, len(x)-2)
    >>> print(f"slope (95%): {res.slope:.6f} +/- {ts*res.stderr:.6f}")
    slope (95%): 1.453392 +/- 0.743465
    >>> print(f"intercept (95%): {res.intercept:.6f}"
    ...       f" +/- {ts*res.intercept_stderr:.6f}")
    intercept (95%): 0.616950 +/- 0.544475

In [35]:
slope, intercept, r, p, se = stats.linregress(X, y)
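
Per the docstring above, tuple unpacking drops the standard error of the intercept; using the return value as an object exposes everything (a minimal sketch):

# Full result object: same fit, plus intercept_stderr
res = stats.linregress(X, y)
print(res.slope, res.intercept)           # identical to slope, intercept above
print(res.stderr, res.intercept_stderr)   # standard errors of slope and intercept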
In [21]:
y_hat_bis = slope * X + intercept
plt.grid()
plt.scatter(X, y)
plt.plot(X, y_hat, c='red') 
plt.plot(X, y_hat_bis, c='green') 
Out[21]:
[<matplotlib.lines.Line2D at 0xffff41812c10>]
[Figure: scatter plot with the hand-computed fit (red) and the scipy.stats fit (green); the two lines coincide]
In [36]:
R2 = r**2
print('R2: {:.2%}'.format(R2))
R2: 70.20%

70% of the variability in profit is explained by the population.
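
As a sanity check, the same R² can be rebuilt from the residuals as 1 − SS_res/SS_tot, reusing the hand-computed fit; it should match the value above:

# R² from sums of squares
ss_res = np.sum((np.array(y_hat) - y)**2)   # residual sum of squares
ss_tot = np.sum((y - y_bar)**2)             # total sum of squares
print('R2 (from residuals): {:.2%}'.format(1 - ss_res / ss_tot))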

In [40]:
print('p_value: {:.2%}'.format(p))
print('p_value: ',p)
p_value: 0.00%
p_value:  1.0232099778760524e-26

The percentage format rounds to 0.00% because p ≈ 1e-26; the raw value shows the slope is highly significant, so the linear relationship between population and profit is almost certainly not due to chance.
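
Following the confidence-interval example in the linregress docstring, a 95% interval for the slope can be sketched as follows (0.05 is an assumed significance level):

# Two-sided 95% CI for the slope, as in the docstring example
from scipy.stats import t
ts = abs(t.ppf(0.05 / 2, len(X) - 2))   # critical t value, df = n - 2
print('slope (95%): {:.4f} +/- {:.4f}'.format(slope, ts * se))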