10 Commits

Author SHA1 Message Date
68dffa6486 feat(learning.ipynb): add a better algorithm 2026-03-28 20:04:14 +01:00
c7d2077b23 feat: add first model (part 1) 2026-03-28 19:58:09 +01:00
106877a073 feat: init Learning class and add drop_empty_price function 2026-03-28 15:51:46 +01:00
Loïc GUEZO
416cfcbf8b Add Python package ecosystem to Dependabot config
Configure Dependabot for Python package updates.
2026-03-27 22:11:53 +01:00
32c5310e37 fix: update the pytest tests 2026-03-27 22:06:36 +01:00
9dfc7457a0 fix(scraper.py): remove commented-out code and prints 2026-03-27 22:06:06 +01:00
f5d5703e49 fix(scraper): update the _getproduitslist lookup
Following a redesign of the UI and backend, the JSON data structure sent by the web page has been simplified.

Old structure:

- `"props"->"pageProps"->"initialReduxState"->"categ"->"content->"produits"`

New structure:

- `"props"->"pageProps"->"produits"`
2026-03-27 21:47:06 +01:00
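The simplified path can be illustrated with a short sketch. This is a standalone example, not the project's actual `_getproduitslist` code; `get_products` and the sample payload are illustrative, and the key name `products` follows the test fixtures in this changeset:

```python
from json import loads


def get_products(raw_json: str) -> list[dict]:
    """Extract the product list from the simplified page payload."""
    data = loads(raw_json)
    # New structure: "props" -> "pageProps" -> "products"
    return data["props"]["pageProps"]["products"]


payload = '{"props": {"pageProps": {"products": [{"seoKeyword": "/demo.html"}]}}}'
print(get_products(payload))  # → [{'seoKeyword': '/demo.html'}]
```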
888defb6b6 rework: update the documentation and remove the English version 2026-03-09 19:10:06 +01:00
734e3898e9 add: start writing README.md 2026-03-09 14:35:57 +01:00
4bb3112dd0 add: create the out.csv file on main and add a README 2026-03-09 14:16:05 +01:00
13 changed files with 585 additions and 32 deletions

18
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,18 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "weekly"
day: "saturday"
open-pull-requests-limit: 5
groups:
python-dependencies:
patterns:
- "*"


@@ -19,15 +19,15 @@ jobs:
steps:
- uses: actions/checkout@v4
-      - name: Set up Python 3.10
+      - name: Set up Python 3.x
uses: actions/setup-python@v4
with:
-          python-version: "3.10"
+          python-version: "3.x"
- name: install dependencies
run: |
python -m pip install --upgrade pip
-          pip install ".[test,doc]"
+          pip install ".[test]"
- name: Lint with flake8
run: |


@@ -32,15 +32,14 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
-      - name: Set up Python 3.10
+      - name: Set up Python 3.x
uses: actions/setup-python@v5
with:
-          python-version: '3.10'
+          python-version: '3.x'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
# Install the project in editable mode with the doc extras
pip install -e ".[doc]"
- name: Setup Pages


@@ -1 +1,37 @@
-# millesima_projetS6
+# Millesima AI Engine 🍷
> A **University of Paris-Est Créteil (UPEC)** Semester 6 project.
## Documentation
- 🇫🇷 [Version Française](https://guezoloic.github.io/millesima-ai-engine)
> Note: only the French version is available for now.
---
## Installation
> Make sure you have **Python 3.10+** installed.
1. **Clone the repository:**
```bash
git clone https://github.com/your-username/millesima-ai-engine.git
cd millesima-ai-engine
```
2. **Set up a virtual environment:**
```bash
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
```
3. **Install dependencies:**
```bash
pip install -e .
```
## Usage
### 1. Data Extraction (Scraping)
To fetch the latest wine data from Millesima:
```bash
python3 src/scraper.py
```
> Note: fetching all the data may take a while, depending on the catalog size.


@@ -1,3 +1,16 @@
# Millesima
The goal of this project is to study, using machine-learning methods, the impact of various criteria (critics' scores, appellation) on the price of a wine. To do so, we rely on the Millesima site (https://www.millesima.fr/), which has the advantage of having no bot protection. Out of respect for the site's host, we will keep the number of requests to a strict minimum. In particular, we will make sure the code works before scraping the entire site, to avoid repeated runs.
## projet
<div style="text-align: center;">
<object
data="/millesima-ai-engine/projet.pdf"
type="application/pdf"
width="100%"
height="1000px"
>
<p>Your browser cannot display this PDF.
<a href="/millesima-ai-engine/projet.pdf">Click here to download it.</a></p>
</object>
</div>

387
learning.ipynb Normal file

File diff suppressed because one or more lines are too long


@@ -1,4 +1,5 @@
site_name: "Projet Millesima S6"
site_url: "https://github.guezoloic.com/millesima-ai-engine/"
theme:
name: "material"
@@ -7,6 +8,11 @@ plugins:
- search
- mkdocstrings
extra:
generator: false
copyright: "Loïc GUEZO & Chahrazad DAHMANI UPEC S6 2026"
markdown_extensions:
- admonition
- pymdownx.details


@@ -6,8 +6,14 @@ dependencies = [
"beautifulsoup4==4.14.3",
"pandas==2.3.3",
"tqdm==4.67.3",
"scikit-learn==1.7.2",
"matplotlib==3.10.8"
]
[tool.pytest.ini_options]
pythonpath = "src"
testpaths = ["tests"]
[project.optional-dependencies]
test = ["pytest==8.4.2", "requests-mock==1.12.1", "flake8==7.3.0"]
doc = ["mkdocs<2.0.0", "mkdocs-material==9.6.23", "mkdocstrings[python]"]


@@ -92,14 +92,24 @@ class Cleaning:
self._vins = self._vins.join(appellation_dummies)
return self
def drop_empty_price(self) -> "Cleaning":
self._vins = self._vins.dropna(subset=["Prix"])
return self
def main() -> None:
if len(argv) != 2:
raise ValueError(f"Usage: {argv[0]} <filename.csv>")
filename = argv[1]
-    cleaning: Cleaning = Cleaning(filename)
-    _ = cleaning.drop_empty_appellation().fill_missing_scores().encode_appellation()
+    cleaning: Cleaning = (
+        Cleaning(filename)
+        .drop_empty_appellation()
+        .fill_missing_scores()
+        .encode_appellation()
+        .drop_empty_price()
+    )
cleaning.getVins().to_csv("clean.csv", index=False)
if __name__ == "__main__":

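The new `drop_empty_price` step boils down to `DataFrame.dropna` on the price column. A minimal sketch with toy data (the column names mirror the French ones used in the project; the sample rows are made up):

```python
import pandas as pd

# Toy stand-in for the scraped wine table ("Prix" = price).
vins = pd.DataFrame({
    "Nom": ["Vin A", "Vin B", "Vin C"],
    "Prix": [42.0, None, 15.5],
})

# Keep only rows whose "Prix" is present, as drop_empty_price does.
cleaned = vins.dropna(subset=["Prix"])
print(len(cleaned))  # → 2
```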
93
src/learning.py Executable file

@@ -0,0 +1,93 @@
from typing import Any, Callable
from pandas import DataFrame
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
import matplotlib.pyplot as plt
from cleaning import Cleaning
class Learning:
def __init__(self, vins: DataFrame, target: str) -> None:
self.X = vins.drop(target, axis=1)
self.y = vins[target]
self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(
self.X, self.y, test_size=0.25, random_state=49
)
def evaluate(
self,
estimator,
pretreatment=None,
fn_score=lambda m, xt, yt: m.score(xt, yt),
):
pipeline = make_pipeline(pretreatment, estimator) if pretreatment else estimator
pipeline.fit(self.X_train, self.y_train)
score = fn_score(pipeline, self.X_test, self.y_test)
prediction = pipeline.predict(self.X_test)
return score, prediction
def draw(self, predictions, y_actual):
plt.figure(figsize=(8, 6))
plt.scatter(
predictions,
y_actual,
alpha=0.5,
c="royalblue",
edgecolors="k",
label="Vins",
)
mn = min(predictions.min(), y_actual.min())
mx = max(predictions.max(), y_actual.max())
plt.plot(
[mn, mx],
[mn, mx],
color="red",
linestyle="--",
lw=2,
label="Prédiction Parfaite",
)
plt.xlabel("Prix estimés (estim_LR)")
plt.ylabel("Prix réels (y_test)")
plt.title("titre")
plt.legend()
plt.grid(True, linestyle=":", alpha=0.6)
plt.show()
df_vins = (
Cleaning("data.csv")
.drop_empty_appellation()
.fill_missing_scores()
.encode_appellation()
.drop_empty_price()
.getVins()
)
etude = Learning(df_vins, target="Prix")
print("--- Question 16 & 17 ---")
score_simple, estim_simple = etude.evaluate(LinearRegression())
print(f"Score R² (LR Simple) : {score_simple:.4f}")
etude.draw(estim_simple, etude.y_test)
print("\n--- Question 18 ---")
score_std, estim_std = etude.evaluate(
estimator=LinearRegression(), pretreatment=StandardScaler()
)
print(f"Score R² (Standardisation + LR) : {score_std:.4f}")
etude.draw(estim_std, etude.y_test)
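The `evaluate` flow above (an optional pre-treatment piped into an estimator, then scored on a held-out split) can be reproduced on synthetic data. Everything below is illustrative and stands in for the project's wine table:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, noise-free linear data standing in for the wine features.
rng = np.random.default_rng(49)
X = rng.normal(size=(200, 3))
y = X @ np.array([10.0, 5.0, -2.0]) + 100.0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=49
)
pipeline = make_pipeline(StandardScaler(), LinearRegression())
pipeline.fit(X_train, y_train)
score = pipeline.score(X_test, y_test)  # R² on the held-out quarter
print(f"{score:.4f}")  # → 1.0000 on noise-free data
```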


@@ -377,9 +377,6 @@ class Scraper:
try:
data: dict[str, object] = self.getjsondata(subdir).getdata()
-            for element in ["initialReduxState", "categ", "content"]:
-                data = cast(dict[str, object], data.get(element))
products: list[dict[str, Any]] = cast(
list[dict[str, Any]], data.get("products")
)


@@ -185,17 +185,11 @@ def mock_site():
{dumps({
"props": {
"pageProps": {
-            "initialReduxState": {
-                "categ": {
-                    "content": {
-                        "products": [
-                            {"seoKeyword": "/nino-negri-5-stelle-sfursat-2022.html",},
-                            {"seoKeyword": "/poubelle",},
-                            {"seoKeyword": "/",}
-                        ]
-                    }
-                }
-            }
+            "products": [
+                {"seoKeyword": "/nino-negri-5-stelle-sfursat-2022.html",},
+                {"seoKeyword": "/poubelle",},
+                {"seoKeyword": "/",}
+            ]
}
}
}
@@ -213,14 +207,8 @@ def mock_site():
{dumps({
"props": {
"pageProps": {
-            "initialReduxState": {
-                "categ": {
-                    "content": {
-                        "products": [
-                        ]
-                    }
-                }
-            }
+            "products": [
+            ]
}
}
}
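Both updated fixtures share the same flat shape, so a single access path covers the populated and the empty case. A quick standalone check (the payloads are abbreviated, not the full fixtures):

```python
from json import dumps, loads


def products_of(payload: str) -> list:
    """Walk the flat path used by both mock_site fixtures."""
    return loads(payload)["props"]["pageProps"]["products"]


full = dumps({"props": {"pageProps": {"products": [
    {"seoKeyword": "/nino-negri-5-stelle-sfursat-2022.html"},
    {"seoKeyword": "/poubelle"},
]}}})
empty = dumps({"props": {"pageProps": {"products": []}}})

print(len(products_of(full)), len(products_of(empty)))  # → 2 0
```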