3 Commits

Author SHA1 Message Date
DAHMANI chahrazad    0a3561ffaa    Fix comments for clarity in main.py    2026-02-07 02:25:31 +01:00
Chahrazad650         6366fcd8dd    add prix function                      2026-02-07 02:21:30 +01:00
Chahrazad650         5797b72cbc    just a test                            2026-02-05 18:28:07 +01:00
19 changed files with 171 additions and 1243 deletions

View File

@@ -1,18 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "saturday"
    open-pull-requests-limit: 5
    groups:
      python-dependencies:
        patterns:
          - "*"

View File

@@ -5,34 +5,35 @@ name: Python application
on:
  push:
-    branches: ["main"]
+    branches: [ "main" ]
  pull_request:
-    branches: ["main"]
+    branches: [ "main" ]
permissions:
-  contents: write
+  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@v4
-      - name: Set up Python 3.10
-        uses: actions/setup-python@v4
-        with:
-          python-version: "3.10"
-      - name: install dependencies
-        run: |
-          python -m pip install --upgrade pip
-          pip install ".[test,doc]"
-      - name: Lint with flake8
-        run: |
-          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
-          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
-      - name: Test with pytest
-        run: pytest
+      - uses: actions/checkout@v4
+      - name: Set up Python 3.10
+        uses: actions/setup-python@v3
+        with:
+          python-version: "3.10"
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install flake8 pytest
+          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+      - name: Lint with flake8
+        run: |
+          # stop the build if there are Python syntax errors or undefined names
+          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+      - name: Test with pytest
+        run: |
+          pytest

View File

@@ -1,58 +0,0 @@
# Simple workflow for deploying static content to GitHub Pages
name: Deploy static content to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["main"]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Single deploy job since we're just deploying
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Python 3.10
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          # Install the project in editable mode with the doc extras
          pip install -e ".[doc]"
      - name: Setup Pages
        uses: actions/configure-pages@v5
      - name: Build Documentation
        run: mkdocs build
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          # Upload the generated site directory
          path: './site'
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

2 .gitignore vendored
View File

@@ -205,5 +205,3 @@ cython_debug/
marimo/_static/
marimo/_lsp/
__marimo__/
-*.csv

View File

@@ -1,37 +1 @@
# Millesima AI Engine 🍷

> A **University of Paris-Est Créteil (UPEC)** Semester 6 project.

## Documentation
- 🇫🇷 [French version](https://guezoloic.github.io/millesima-ai-engine)

> Note: only the French version is available for now.

---

## Installation

> Make sure you have **Python 3.10+** installed.

1. **Clone the repository:**
   ```bash
   git clone https://github.com/votre-pseudo/millesima-ai-engine.git
   cd millesima-ai-engine
   ```
2. **Set up a virtual environment:**
   ```bash
   python3 -m venv .venv
   source .venv/bin/activate  # Windows: .venv\Scripts\activate
   ```
3. **Install dependencies:**
   ```bash
   pip install -e .
   ```

## Usage

### 1. Data Extraction (Scraping)
To fetch the latest wine data from Millesima:
```bash
python3 src/scraper.py
```
> Note: this will take some time, depending on the catalog size.

# millesima_projetS6

View File

@@ -1,17 +0,0 @@
# Cleaning

## Contents
[TOC]

---

## `Cleaning` class

::: src.cleaning.Cleaning
    options:
      heading_level: 3
      members:
        - __init__
        - getVins
        - drop_empty_appellation
        - fill_missing_scores
        - encode_appellation

View File

@@ -1,16 +0,0 @@
# Millesima

The goal of this project is to study, using machine-learning methods, the impact of different criteria (critic scores, appellation) on the price of a wine. To do so, we rely on the Millesima website (https://www.millesima.fr/), which has the advantage of having no bot protection. Out of respect for the site's host, we keep the number of requests to a strict minimum. In particular, we make sure the code works before scraping the entire site, to avoid repeated runs.

## Project

<div style="text-align: center;">
  <object
    data="/millesima-ai-engine/projet.pdf"
    type="application/pdf"
    width="100%"
    height="1000px"
  >
    <p>Your browser cannot display this PDF.
    <a href="/millesima-ai-engine/projet.pdf">Click here to download it.</a></p>
  </object>
</div>
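
For a concrete sense of the analysis this sets up, here is a minimal sketch: an ordinary least-squares fit of price against the three critic scores, on the `clean.csv` produced by `src/cleaning.py`. The column names come from the scraper's CSV header; numpy is assumed to be available (it ships with pandas), and this is an illustration, not the project's actual model.

```python
# Minimal illustration (not the project's final model): ordinary least squares
# of price on critic scores, using the clean.csv produced by src/cleaning.py.
import numpy as np
import pandas as pd

df = pd.read_csv("clean.csv")                      # assumed output of cleaning.py
scores = ["Robert", "Robinson", "Suckling"]        # critic-score columns
X = np.column_stack([df[scores].to_numpy(float),   # add an intercept column
                     np.ones(len(df))])
y = df["Prix"].to_numpy(float)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares fit
for name, c in zip(scores + ["intercept"], coef):
    print(f"{name}: {c:+.2f}")                     # fitted weight of each score
```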

View File

@@ -1,31 +0,0 @@
# Scraper

## Contents
[TOC]

---

## `Scraper` class

::: scraper.Scraper
    options:
      members:
        - __init__
        - getvins
        - getjsondata
        - getresponse
        - getsoup
      heading_level: 4

## `_ScraperData` class

::: scraper._ScraperData
    options:
      members:
        - __init__
        - getdata
        - appellation
        - parker
        - robinson
        - suckling
        - prix
        - informations
      heading_level: 4

110 main.py Normal file
View File

@@ -0,0 +1,110 @@
from sys import stderr
from typing import cast

from bs4 import BeautifulSoup, Tag
from json import JSONDecodeError, loads
from requests import Response, Session


class Scraper:
    """
    Scraper dynamically manages requests, restricted to the
    Millesima HTTPS server, through a single reusable session.
    """

    def __init__(self) -> None:
        self._url: str = "https://www.millesima.fr/"
        self._session: Session = Session()
        self._latest_request: tuple[str, Response | None] = ("", None)

    def _request(self, subdir: str) -> Response:
        target_url: str = self._url + subdir.lstrip("/")
        response: Response = self._session.get(url=target_url, timeout=10)
        response.raise_for_status()
        return response

    def getresponse(self, subdir: str = "") -> Response:
        rq_subdir, rq_response = self._latest_request
        if rq_response is None or subdir != rq_subdir:
            request: Response = self._request(subdir)
            self._latest_request = (subdir, request)
            return request
        return rq_response

    def getsoup(self, subdir: str = "") -> BeautifulSoup:
        markup: str = self.getresponse(subdir).text
        return BeautifulSoup(markup, features="html.parser")

    def getjsondata(self, subdir: str = "", id: str = "__NEXT_DATA__") -> dict[str, object]:
        soup: BeautifulSoup = self.getsoup(subdir)
        script: Tag | None = soup.find("script", id=id)
        if isinstance(script, Tag) and script.string:
            try:
                current_data: object = loads(script.string)
                keys: list[str] = ["props", "pageProps", "initialReduxState", "product", "content"]
                for key in keys:
                    if isinstance(current_data, dict) and key in current_data:
                        current_data = current_data[key]
                    else:
                        raise ValueError(f"Missing key in JSON: {key}")
                if isinstance(current_data, dict):
                    return cast(dict[str, object], current_data)
            except (JSONDecodeError, ValueError) as e:
                print(f"Error while extracting JSON: {e}", file=stderr)
        return {}

    def prix(self, subdir: str) -> float:
        """
        Return the price of a single bottle (75cl).

        The data fetched from the site lists several sale formats
        in the "items" list:
        - single bottle, if nbunit=1 and equivbtl=1
          -> direct price (format sold by the unit).
        - case of several bottles, or special formats (magnum,
          imperial, etc.) otherwise
          -> unit price computed as offerPrice / (nbunit * equivbtl).

        Each item notably carries:
        - offerPrice: total price of the offered format
        - nbunit: number of units in the format
        - equivbtl: equivalent number of standard bottles (75cl)
        """
        data = self.getjsondata(subdir)
        items = data.get("items")
        if not isinstance(items, list) or len(items) == 0:
            raise ValueError("No price available (items empty).")
        # 1) single 75cl bottle (nbunit=1 and equivbtl=1)
        for item in items:
            if not isinstance(item, dict):
                continue
            attrs = item.get("attributes", {})
            nbunit = attrs.get("nbunit", {}).get("value")
            equivbtl = attrs.get("equivbtl", {}).get("value")
            if nbunit == "1" and equivbtl == "1":
                p = item.get("offerPrice")
                if isinstance(p, (int, float)):
                    return float(p)
        # 2) compute the unit price from a case
        for item in items:
            if not isinstance(item, dict):
                continue
            p = item.get("offerPrice")
            attrs = item.get("attributes", {})
            nbunit = attrs.get("nbunit", {}).get("value")
            equivbtl = attrs.get("equivbtl", {}).get("value")
            if isinstance(p, (int, float)) and nbunit and equivbtl:
                denom = float(nbunit) * float(equivbtl)
                if denom > 0:
                    return round(float(p) / denom, 2)
        raise ValueError("Could not determine the unit price.")
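
A quick sketch of the normalization above; the product path is the one exercised in `test_main.py`, and running this performs a live request to millesima.fr:

```python
# Usage sketch -- performs a live request; the path comes from test_main.py.
scraper = Scraper()
price = scraper.prix("/chateau-gloria-2016.html")
print(price)

# The fallback arithmetic, on a hypothetical case of 6 standard bottles
# sold at 300: offerPrice / (nbunit * equivbtl) = 300 / (6 * 1) = 50.0
```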

View File

@@ -1,20 +0,0 @@
site_name: "Projet Millesima S6"
site_url: "https://github.guezoloic.com/millesima-ai-engine/"

theme:
  name: "material"

plugins:
  - search
  - mkdocstrings

extra:
  generator: false

copyright: "Loïc GUEZO & Chahrazad DAHMANI UPEC S6 2026"

markdown_extensions:
  - admonition
  - pymdownx.details
  - pymdownx.superfences
  - pymdownx.tabbed

Binary file not shown.

View File

@@ -1,21 +0,0 @@
[project]
name = "projet-millesima-s6"
version = "0.1.0"
dependencies = [
    "requests==2.33.0",
    "beautifulsoup4==4.14.3",
    "pandas==3.0.1",
    "tqdm==4.67.3",
]

[project.optional-dependencies]
test = ["pytest==9.0.2", "requests-mock==1.12.1", "flake8==7.3.0"]
doc = ["mkdocs<2.0.0", "mkdocs-material==9.7.6", "mkdocstrings[python]"]

[tool.pytest.ini_options]
pythonpath = "src"
testpaths = ["tests"]

[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

2 requirements.txt Normal file
View File

@@ -0,0 +1,2 @@
requests>=2.32.5
beautifulsoup4>=4.14.3

BIN res/projet1.pdf Normal file

Binary file not shown.

View File

@@ -1,113 +0,0 @@
#!/usr/bin/env python3
from os import getcwd
from os.path import normpath, join
from sys import argv
from typing import cast

from pandas import DataFrame, read_csv, to_numeric, get_dummies


def path_filename(filename: str) -> str:
    return normpath(join(getcwd(), filename))


class Cleaning:
    def __init__(self, filename: str) -> None:
        self._vins: DataFrame = read_csv(filename)
        # build the list of all score columns
        self.SCORE_COLS: list[str] = [
            c for c in self._vins.columns if c not in ["Appellation", "Prix"]
        ]
        # convert every score column to numeric
        for col in self.SCORE_COLS:
            self._vins[col] = to_numeric(self._vins[col], errors="coerce")

    def getVins(self) -> DataFrame:
        return self._vins.copy(deep=True)

    def __str__(self) -> str:
        """
        Summarize the DataFrame:
        - shape
        - column dtypes
        - missing values
        - numeric statistics
        """
        return (
            f"Shape: {self._vins.shape[0]} rows x {self._vins.shape[1]} columns\n\n"
            f"Column dtypes:\n{self._vins.dtypes}\n\n"
            f"Missing values:\n{self._vins.isna().sum()}\n\n"
            f"Numeric statistics:\n{self._vins.describe().round(2)}\n\n"
        )

    def drop_empty_appellation(self) -> "Cleaning":
        self._vins = self._vins.dropna(subset=["Appellation"])
        return self

    def _mean_score(self, col: str) -> DataFrame:
        """
        Compute the mean of one score column per appellation.
        - values were already coerced to numeric in __init__ (non-convertible -> NaN)
        - compute the mean per appellation
        - replace any resulting NaN with 0
        """
        means = self._vins.groupby("Appellation", as_index=False)[col].mean()
        means = means.rename(
            columns={col: f"mean_{col}"}
        )  # pyright: ignore[reportCallIssue]
        return cast(DataFrame, means.fillna(0))

    def _mean_robert(self) -> DataFrame:
        return self._mean_score("Robert")

    def _mean_robinson(self) -> DataFrame:
        return self._mean_score("Robinson")

    def _mean_suckling(self) -> DataFrame:
        return self._mean_score("Suckling")

    def fill_missing_scores(self) -> "Cleaning":
        """
        Replace missing scores with the mean score of
        wines from the same appellation.
        """
        for element in self.SCORE_COLS:
            means = self._mean_score(element)
            self._vins = self._vins.merge(means, on="Appellation", how="left")
            mean_col = f"mean_{element}"
            self._vins[element] = self._vins[element].fillna(self._vins[mean_col])
            self._vins = self._vins.drop(columns=[mean_col])
        return self

    def encode_appellation(self, column: str = "Appellation") -> "Cleaning":
        """
        Replace the 'Appellation' column with indicator (dummy) columns.
        """
        appellations = self._vins[column].astype(str).str.strip()
        appellation_dummies = get_dummies(appellations, prefix="App")
        self._vins = self._vins.drop(columns=[column])
        self._vins = self._vins.join(appellation_dummies)
        return self


def main() -> None:
    if len(argv) != 2:
        raise ValueError(f"Usage: {argv[0]} <filename.csv>")
    filename = argv[1]
    cleaning: Cleaning = Cleaning(filename)
    cleaning.drop_empty_appellation() \
        .fill_missing_scores() \
        .encode_appellation() \
        .getVins() \
        .to_csv("clean.csv", index=False)


if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        print(f"ERROR: {e}")

View File

@@ -1,507 +0,0 @@
#!/usr/bin/env python3
from collections import OrderedDict
from io import SEEK_END, SEEK_SET, BufferedWriter, TextIOWrapper
from json import JSONDecodeError, loads
from os import makedirs
from os.path import dirname, exists, join, normpath, realpath
from pickle import UnpicklingError, dump, load
from sys import argv
from typing import Any, Callable, Literal, TypeVar, cast

from bs4 import BeautifulSoup, Tag
from requests import HTTPError, Response, Session
from tqdm.std import tqdm

_dir: str = dirname(realpath(__file__))

T = TypeVar("T")


def _getcache(mode: Literal["rb", "wb"], fn: Callable[[Any], T]) -> T | None:
    """
    Open the cache file ".cache/save" in the given mode and apply fn to
    the file object, creating the cache directory if needed.

    Returns:
        T | None: The result of fn, or None if the cache is missing or corrupt.
    """
    cache_dirname = normpath(join(_dir, ".cache"))
    save_path = normpath(join(cache_dirname, "save"))
    if not exists(cache_dirname):
        makedirs(cache_dirname)
    try:
        with open(save_path, mode) as f:
            return fn(f)
    except (FileNotFoundError, EOFError, UnpicklingError):
        return None


def savestate(data: tuple[int, set[str]]) -> None:
    def save(f: BufferedWriter) -> None:
        _ = f.seek(0)
        _ = f.truncate()
        dump(data, f)
        f.flush()

    _getcache("wb", save)


def loadstate() -> tuple[int, set[str]] | None:
    return _getcache("rb", lambda f: load(f))


class _ScraperData:
    """
    Specialized data container for extracting information from JSON dictionaries.

    This class acts as a simplified interface on top of the raw dictionary
    returned by the __NEXT_DATA__ tag of the Millesima site.
    """

    def __init__(self, data: dict[str, object]) -> None:
        """
        Initialize the container with a data dictionary.

        Args:
            data (dict[str, object]): The raw JSON dictionary extracted from the page.
        """
        self._data: dict[str, object] = data

    def _getcontent(self) -> dict[str, object] | None:
        """
        Walk down the Redux tree to reach the product content.

        Returns:
            dict[str, object] | None: The product dictionary, or None if the structure differs.
        """
        current_data: dict[str, object] = self._data
        for key in ["initialReduxState", "product", "content"]:
            new_data: object | None = current_data.get(key)
            if new_data is None:
                return None
            current_data = cast(dict[str, object], new_data)
        return current_data

    def _getattributes(self) -> dict[str, object] | None:
        """
        Extract the technical attributes (scores, appellations, etc.) of the product.

        Returns:
            dict[str, object] | None: The wine's attributes, or None.
        """
        current_data: dict[str, object] | None = self._getcontent()
        if current_data is None:
            return None
        return cast(dict[str, object], current_data.get("attributes"))

    def prix(self) -> float | None:
        """
        Compute the unit price of one bottle (standardized to 75cl).

        The site often sells by the case (6, 12 bottles) or in large
        formats (Magnum). This method normalizes the price to a single unit.

        Returns:
            float | None: The computed price rounded to 2 decimals, or None.
        """
        content = self._getcontent()
        if content is None:
            return None
        items = content.get("items")
        # check that items exists and is not empty
        if not isinstance(items, list) or len(items) == 0:
            return None
        prix_calcule: float | None = None
        for item in items:
            if not isinstance(item, dict):
                continue
            p = item.get("offerPrice")
            attrs = item.get("attributes", {})
            nbunit = attrs.get("nbunit", {}).get("value")
            equivbtl = attrs.get("equivbtl", {}).get("value")
            if not isinstance(p, (int, float)) or not nbunit or not equivbtl:
                continue
            nb = float(nbunit)
            eq = float(equivbtl)
            if nb <= 0 or eq <= 0:
                continue
            if nb == 1 and eq == 1:
                return float(p)
            prix_calcule = round(float(p) / (nb * eq), 2)
        return prix_calcule

    def appellation(self) -> str | None:
        """
        Extract the wine's appellation name.

        Returns:
            str | None: The name (e.g. 'Pauillac') or None.
        """
        attrs: dict[str, object] | None = self._getattributes()
        if attrs is not None:
            app_dict: object | None = attrs.get("appellation")
            if isinstance(app_dict, dict):
                return cast(str, app_dict.get("value"))
        return None

    def _getcritiques(self, name: str) -> str | None:
        """
        Generic parser for critic scores (Parker, Suckling, etc.).

        Handles plain scores ("95") and score ranges ("95-97") by averaging.

        Args:
            name (str): The attribute key in the JSON (e.g. 'note_rp').

        Returns:
            str | None: The score formatted as a string, or None.
        """
        current_value: dict[str, object] | None = self._getattributes()
        if current_value is not None:
            app_dict: dict[str, object] = cast(
                dict[str, object], current_value.get(name)
            )
            if not app_dict:
                return None
            val = cast(str, app_dict.get("value")).rstrip("+").split("-")
            if len(val) > 1 and val[1] != "":
                val[0] = str(round((float(val[0]) + float(val[1])) / 2, 1))
            return val[0]
        return None

    def parker(self) -> str | None:
        """Robert Parker score."""
        return self._getcritiques("note_rp")

    def robinson(self) -> str | None:
        """Jancis Robinson score."""
        return self._getcritiques("note_jr")

    def suckling(self) -> str | None:
        """James Suckling score."""
        return self._getcritiques("note_js")

    def getdata(self) -> dict[str, object]:
        """Return the full data dictionary."""
        return self._data

    def informations(self) -> str:
        """
        Aggregate the key fields for the CSV export.

        Returns:
            str: Formatted line: "Appellation,Parker,Robinson,Suckling,Prix".
        """
        appellation = self.appellation()
        parker = self.parker()
        robinson = self.robinson()
        suckling = self.suckling()
        prix = self.prix()
        return f"{appellation},{parker},{robinson},{suckling},{prix}"


class Scraper:
    """
    HTTP client tuned for scraping millesima.fr.

    Manages a persistent session, browser-like headers and a two-level
    cache to improve both performance and discretion.
    """

    def __init__(self) -> None:
        """
        Set up the browsing infrastructure:
        - a session, to avoid a new handshake for every request
        - headers, to avoid being blocked from the site
        - a caching system
        """
        self._url: str = "https://www.millesima.fr/"
        # Very useful to avoid repeating the same TCP handshakes
        # and to keep a constant connection with the server
        self._session: Session = Session()
        # Forge a browser identity so the site does not
        # block us as robots
        self._session.headers.update(
            {
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) \
Chrome/122.0.0.0 Safari/537.36",
                "Accept-Language": "fr-FR,fr;q=0.9,en;q=0.8",
            }
        )
        # Caching system to avoid hitting the server needlessly
        # used by _request
        self._latest_request: tuple[str, Response] | None = None
        # used by getsoup
        self._latest_soups: OrderedDict[str, BeautifulSoup] = OrderedDict[
            str, BeautifulSoup
        ]()

    def _request(self, subdir: str) -> Response:
        """
        Perform a GET request against the Millesima server.

        Args:
            subdir (str): The URL subdirectory or path (e.g. "/vins").

        Returns:
            Response: The response object.

        Raises:
            HTTPError: If the server returns an error code (4xx, 5xx).
        """
        target_url: str = self._url + subdir.lstrip("/")
        # send a GET request to the page; raises on error
        response: Response = self._session.get(url=target_url, timeout=30)
        response.raise_for_status()
        return response

    def getresponse(self, subdir: str = "", use_cache: bool = True) -> Response:
        """
        Fetch a page's response, using the cache when possible.

        Args:
            subdir (str, optional): The page path.
            use_cache (bool, optional): Reuse the saved response, or
                overwrite it with the fresh one.

        Returns:
            Response: The response object (cached or fresh).

        Raises:
            HTTPError: If the server returns an error code (4xx, 5xx).
        """
        # if cached, latest_request exists
        if use_cache and self._latest_request is not None:
            rq_subdir, rq_response = self._latest_request
            # same request and use_cache is true:
            # return the stored response
            if subdir == rq_subdir:
                return rq_response
        request: Response = self._request(subdir)
        # rebuild the cache entry if caching is enabled
        if use_cache:
            self._latest_request = (subdir, request)
        return request

    def getsoup(self, subdir: str, use_cache: bool = True) -> BeautifulSoup:
        """
        Fetch a page's HTML and parse it into a BeautifulSoup object.

        Args:
            subdir (str): The page path.
            use_cache (bool, optional): Reuse a previously parsed soup.

        Returns:
            BeautifulSoup: The parsed object, ready for data extraction.

        Raises:
            HTTPError: If the server returns an error code (4xx, 5xx).
        """
        if use_cache and subdir in self._latest_soups:
            return self._latest_soups[subdir]
        markup: str = self.getresponse(subdir).text
        soup: BeautifulSoup = BeautifulSoup(markup, features="html.parser")
        if use_cache:
            self._latest_soups[subdir] = soup
            # keep at most 10 soups; evict the oldest entry
            if len(self._latest_soups) > 10:
                _ = self._latest_soups.popitem(last=False)
        return soup

    def getjsondata(self, subdir: str, id: str = "__NEXT_DATA__") -> _ScraperData:
        """
        Extract the JSON data held in the site's __NEXT_DATA__ tag.

        Args:
            subdir (str): The page path.
            id (str, optional): The id of the script tag.

        Raises:
            HTTPError: Error returned by the server (4xx, 5xx).
            JSONDecodeError: If the tag content is not valid JSON.
            ValueError: If the 'props' or 'pageProps' keys are missing.

        Returns:
            _ScraperData: Instance holding the extracted data.
        """
        soup: BeautifulSoup = self.getsoup(subdir)
        script: Tag | None = soup.find("script", id=id)
        if script is None or not script.string:
            raise ValueError(f"script id={id} not found")
        current_data: object = cast(object, loads(script.string))
        for key in ["props", "pageProps"]:
            if isinstance(current_data, dict) and key in current_data:
                current_data = cast(object, current_data[key])
                continue
            raise ValueError(f"Missing key in JSON: {key}")
        return _ScraperData(cast(dict[str, object], current_data))

    def _geturlproductslist(self, subdir: str) -> list[dict[str, Any]] | None:
        """
        Fetch the product list of a category page.
        """
        try:
            data: dict[str, object] = self.getjsondata(subdir).getdata()
            products: list[dict[str, Any]] = cast(
                list[dict[str, Any]], data.get("products")
            )
            return products
        except (JSONDecodeError, HTTPError):
            return None

    def _writevins(self, cache: set[str], product: dict[str, Any], f: Any) -> None:
        """
        Write one product's CSV line if it has not been scraped yet.

        Args:
            cache (set[str]): Links already written.
            product (dict): Product entry from a category page.
            f (Any): Output file object.
        """
        if isinstance(product, dict):
            link: Any | None = product.get("seoKeyword")
            if link and link not in cache:
                try:
                    infos = self.getjsondata(link).informations()
                    _ = f.write(infos + "\n")
                    cache.add(link)
                except (JSONDecodeError, HTTPError) as e:
                    print(f"Error on product {link}: {e}")

    def _initstate(self, reset: bool) -> tuple[int, set[str]]:
        """
        Load the cached state if it exists; otherwise fall back to the
        default values. This lets a run resume where it stopped instead
        of restarting the whole process.

        Args:
            reset (bool): Whether to reset the state.

        Returns:
            tuple[int, set[str]]: The current page and the cache contents.
        """
        if not reset:
            serializable: tuple[int, set[str]] | None = loadstate()
            if isinstance(serializable, tuple):
                return serializable
        return 1, set()

    def _ensuretitle(self, f: TextIOWrapper, title: str) -> None:
        """
        Check that the header is present at the start of the file and
        write it if not. Small potential bug: "a+" always appends at the
        end of the buffer, so if data was written before the header, the
        header lands after that data; we assume nobody edits the file
        by hand.

        Args:
            f (TextIOWrapper): File buffer stream.
            title (str): CSV header line.
        """
        _ = f.seek(0, SEEK_SET)
        if not (f.read(len(title)) == title):
            _ = f.write(title)
        else:
            _ = f.seek(0, SEEK_END)

    def getvins(self, subdir: str, filename: str, reset: bool = False) -> None:
        """
        Scrape every page of a category and save the result as CSV.

        Args:
            subdir (str): The category (e.g. '/vins-rouges').
            filename (str): Output file name (e.g. 'vins.csv').
            reset (bool): (Optional) restart the process from scratch.
        """
        # file write mode
        mode: Literal["w", "a+"] = "w" if reset else "a+"
        # CSV header
        title: str = "Appellation,Robert,Robinson,Suckling,Prix\n"
        # page: page where the scraper starts
        # cache: all pages already visited
        page, cache = self._initstate(reset)
        try:
            with open(filename, mode) as f:
                self._ensuretitle(f, title)
                while True:
                    products_list: list[dict[str, Any]] | None = (
                        self._geturlproductslist(f"{subdir}?page={page}")
                    )
                    if not products_list:
                        break
                    pbar: tqdm[dict[str, Any]] = tqdm(
                        products_list, bar_format="{l_bar} {bar:20} {r_bar}"
                    )
                    for product in pbar:
                        keyword: str = cast(
                            str, product.get("seoKeyword", "Unknown")[:40]
                        )
                        pbar.set_description(
                            f"Page: {page:<3} | Product: {keyword:<40}"
                        )
                        self._writevins(cache, product, f)
                    page += 1
                    # checkpoint the state every 5 pages in case of
                    # SIGHUP or any other interruption
                    if page % 5 == 0 and not reset:
                        savestate((page, cache))
        # Exception already covers HTTPError and JSONDecodeError
        except (Exception, KeyboardInterrupt):
            if not reset:
                savestate((page, cache))


def main() -> None:
    if len(argv) != 3:
        raise ValueError(f"{argv[0]} <filename> <sub-url>")
    filename = argv[1]
    suburl = argv[2]
    scraper: Scraper = Scraper()
    scraper.getvins(suburl, filename)


if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        print(f"ERROR: {e}")

35 test_main.py Normal file
View File

@@ -0,0 +1,35 @@
import json

from main import Scraper


def test_json():
    scraper = Scraper()
    data = scraper.getjsondata("/chateau-gloria-2016.html")
    print("Fetched JSON:")
    print(json.dumps(data, indent=4, ensure_ascii=False))
    assert isinstance(data, dict)
    assert "items" in data


def test_prix():
    scraper = Scraper()
    try:
        p = scraper.prix("/chateau-saint-pierre-2011.html")
        print("Unit price =", p)
        assert isinstance(p, float)
        assert p > 0
    except ValueError:
        # the wine is not available for sale
        print("OK: no price (wine unavailable, items empty)")


if __name__ == "__main__":
    test_json()
    test_prix()
    print("\nAll tests finished")

View File

@@ -1,67 +0,0 @@
import pytest
from unittest.mock import patch, mock_open

from cleaning import Cleaning


@pytest.fixture
def cleaning_raw() -> Cleaning:
    """
    "Appellation": ["Pauillac", "Pauillac ", "Margaux", None  , "Pomerol", "Pomerol"],
    "Robert":      ["95"      , None       , "bad"    , 90    , None     , None     ],
    "Robinson":    [None      , "93"       , 18       , None  , None     , None     ],
    "Suckling":    [96        , None       , None     , None  , 91       , None     ],
    "Prix":        ["10.0"    , "11.0"     , "20.0"   , "30.0", "40.0"   , "50.0"   ],
    """
    csv_content = """Appellation,Robert,Robinson,Suckling,Prix
Pauillac,95,,96,10.0
Pauillac ,,93,,11.0
Margaux,bad,18,,20.0
,90,,,30.0
Pomerol,,,91,40.0
Pomerol,,,,50.0
"""
    m = mock_open(read_data=csv_content)
    with patch("builtins.open", m):
        return Cleaning("donnee.csv")


def test_drop_empty_appellation(cleaning_raw: Cleaning) -> None:
    out = cleaning_raw.drop_empty_appellation().getVins()
    assert out["Appellation"].isna().sum() == 0
    assert len(out) == 5


def test_mean_score_zero_when_no_scores(cleaning_raw: Cleaning) -> None:
    out = cleaning_raw.drop_empty_appellation()
    m = out._mean_score("Robert")
    assert list(m.columns) == ["Appellation", "mean_Robert"]
    pomerol_mean = m.loc[m["Appellation"].str.strip() == "Pomerol", "mean_Robert"].iloc[
        0
    ]
    assert pomerol_mean == 0


def test_fill_missing_scores(cleaning_raw: Cleaning):
    cleaning_raw._vins["Appellation"] = cleaning_raw._vins["Appellation"].str.strip()
    cleaning_raw.drop_empty_appellation()
    filled = cleaning_raw.fill_missing_scores().getVins()
    for col in cleaning_raw.SCORE_COLS:
        assert filled[col].isna().sum() == 0
    pauillac_robert = filled[filled["Appellation"] == "Pauillac"]["Robert"]
    assert (pauillac_robert == 95.0).all()


def test_encode_appellation(cleaning_raw: Cleaning):
    cleaning_raw._vins["Appellation"] = cleaning_raw._vins["Appellation"].str.strip()
    out = (
        cleaning_raw.drop_empty_appellation()
        .fill_missing_scores()
        .encode_appellation()
        .getVins()
    )
    assert "App_Appellation" not in out.columns
    assert "App_Pauillac" in out.columns
    assert int(out.loc[0, "App_Pauillac"]) == 1

View File

@@ -1,314 +0,0 @@
from json import dumps
from unittest.mock import patch, mock_open

import pytest
from requests_mock import Mocker

from scraper import Scraper


@pytest.fixture(autouse=True)
def mock_site():
    with Mocker() as m:
        m.get(
            "https://www.millesima.fr/",
            text=f"""
            <html>
              <body>
                <script id="__NEXT_DATA__" type="application/json">
                {dumps({
                    "props": {
                        "pageProps": {
                            "initialReduxState": {
                                "product": {
                                    "content": {
                                        "items": [],
                                        "attributes": {}
                                    }
                                }
                            }
                        }
                    }
                })}
                </script>
              </body>
            </html>
            """,
        )
        m.get(
            "https://www.millesima.fr/poubelle",
            text=f"""
            <html>
              <body>
                <h1>POUBELLE</h1>
                <script id="__NEXT_DATA__" type="application/json">
                {dumps({
                    "props": {
                        "pageProps": {}
                    }
                })}
                </script>
              </body>
            </html>
            """,
        )
        json_data = {
            "props": {
                "pageProps": {
                    "initialReduxState": {
                        "product": {
                            "content": {
                                "_id": "J4131/22-11652",
                                "partnumber": "J4131/22",
                                "productName": "Nino Negri : 5 Stelle Sfursat 2022",
                                "productNameForSearch": "Nino Negri : 5 Stelle Sfursat 2022",
                                "storeId": "11652",
                                "seoKeyword": "nino-negri-5-stelle-sfursat-2022.html",
                                "title": "Nino Negri : 5 Stelle Sfursat 2022",
                                "items": [
                                    {
                                        "_id": "J4131/22/C/CC/6-11652",
                                        "partnumber": "J4131/22/C/CC/6",
                                        "taxRate": "H",
                                        "listPrice": 842,
                                        "offerPrice": 842,
                                        "seoKeyword": "vin-de-charazade1867.html",
                                        "shortdesc": "Une bouteille du meilleur vin du monde?",
                                        "attributes": {
                                            "promotion_o_n": {
                                                "valueId": "0",
                                                "name": "En promotion",
                                                "value": "Non",
                                                "sequence": 80,
                                                "displayable": "False",
                                                "type": "CHECKBOX",
                                                "isSpirit": False,
                                            },
                                            "in_stock": {
                                                "valueId": "L",
                                                "name": "En stock",
                                                "value": "Livrable",
                                                "sequence": 65,
                                                "displayable": "true",
                                                "type": "CHECKBOX",
                                                "isSpirit": False,
                                            },
                                            "equivbtl": {
                                                "valueId": "1",
                                                "name": "equivbtl",
                                                "value": "1",
                                                "isSpirit": False,
                                            },
                                            "nbunit": {
                                                "valueId": "1",
                                                "name": "nbunit",
                                                "value": "1",
                                                "isSpirit": False,
                                            },
                                        },
                                        "stock": 12,
                                        "availability": "2026-02-05",
                                        "isCustomizable": False,
                                        "gtin_cond": "",
                                        "gtin_unit": "",
                                        "stockOrigin": "EUR",
                                        "isPrevSale": False,
                                    }
                                ],
                                "attributes": {
                                    "appellation": {
                                        "valueId": "433",
                                        "name": "Appellation",
                                        "value": "Madame-Loïk",
                                        "url": "Madame-loik.html",
                                        "isSpirit": False,
                                        "groupIdentifier": "appellation_433",
                                    },
                                    "note_rp": {
                                        "valueId": "91",
                                        "name": "Peter Parker",
                                        "value": "91",
                                        "isSpirit": False,
                                    },
                                    "note_jr": {
                                        "valueId": "17+",
                                        "name": "J. Robinson",
                                        "value": "17+",
                                        "isSpirit": False,
                                    },
                                    "note_js": {
                                        "valueId": "93-94.5",
                                        "name": "J. cherazade",
                                        "value": "93-94",
                                        "isSpirit": False,
                                    },
                                },
                            }
                        }
                    }
                }
            }
        }
        html_product = f"""
        <html>
          <body>
            <h1>MILLESIMA</h1>
            <script id="__NEXT_DATA__" type="application/json">
            {dumps(json_data)}
            </script>
          </body>
        </html>
        """
        m.get(
            "https://www.millesima.fr/nino-negri-5-stelle-sfursat-2022.html",
            text=html_product,
        )
        list_pleine = f"""
        <html>
          <body>
            <h1>LE WINE</h1>
            <script id="__NEXT_DATA__" type="application/json">
            {dumps({
                "props": {
                    "pageProps": {
                        "products": [
                            {"seoKeyword": "/nino-negri-5-stelle-sfursat-2022.html"},
                            {"seoKeyword": "/poubelle"},
                            {"seoKeyword": "/"}
                        ]
                    }
                }
            })}
            </script>
          </body>
        </html>
        """
        list_vide = f"""
        <html>
          <body>
            <h1>LE WINE</h1>
            <script id="__NEXT_DATA__" type="application/json">
            {dumps({
                "props": {
                    "pageProps": {
                        "products": []
                    }
                }
            })}
            </script>
          </body>
        </html>
        """
        m.get(
            "https://www.millesima.fr/wine.html",
            complete_qs=False,
            response_list=[
                {"text": list_pleine},
                {"text": list_vide},
            ],
        )
        # yield m without shutting down the server that simulates the page
        yield m


@pytest.fixture
def scraper() -> Scraper:
    return Scraper()


def test_soup(scraper: Scraper):
    vide = scraper.getsoup("")
    poubelle = scraper.getsoup("poubelle")
    contenu = scraper.getsoup("nino-negri-5-stelle-sfursat-2022.html")
    assert vide.find("h1") is None
    assert str(poubelle.find("h1")) == "<h1>POUBELLE</h1>"
    assert str(contenu.find("h1")) == "<h1>MILLESIMA</h1>"


def test_appellation(scraper: Scraper):
    vide = scraper.getjsondata("")
    poubelle = scraper.getjsondata("poubelle")
    contenu = scraper.getjsondata("nino-negri-5-stelle-sfursat-2022.html")
    assert vide.appellation() is None
    assert poubelle.appellation() is None
    assert contenu.appellation() == "Madame-Loïk"


def test_fonctionprivee(scraper: Scraper):
    vide = scraper.getjsondata("")
    poubelle = scraper.getjsondata("poubelle")
    contenu = scraper.getjsondata("nino-negri-5-stelle-sfursat-2022.html")
    assert vide._getattributes() is not None
    assert vide._getattributes() == {}
    assert vide._getcontent() is not None
    assert vide._getcontent() == {"items": [], "attributes": {}}
    assert poubelle._getattributes() is None
    assert poubelle._getcontent() is None
    assert contenu._getcontent() is not None
    assert contenu._getattributes() is not None


def test_critiques(scraper: Scraper):
    vide = scraper.getjsondata("")
    poubelle = scraper.getjsondata("poubelle")
    contenu = scraper.getjsondata("nino-negri-5-stelle-sfursat-2022.html")
    assert vide.parker() is None
    assert vide.robinson() is None
    assert vide.suckling() is None
    assert vide._getcritiques("test_ts") is None
    assert poubelle.parker() is None
    assert poubelle.robinson() is None
    assert poubelle.suckling() is None
    assert poubelle._getcritiques("test_ts") is None
    assert contenu.parker() == "91"
    assert contenu.robinson() == "17"
    assert contenu.suckling() == "93.5"
    assert contenu._getcritiques("test_ts") is None


def test_prix(scraper: Scraper):
    vide = scraper.getjsondata("")
    poubelle = scraper.getjsondata("poubelle")
    contenu = scraper.getjsondata("nino-negri-5-stelle-sfursat-2022.html")
    assert vide.prix() is None
    assert poubelle.prix() is None
    assert contenu.prix() == 842.0


def test_informations(scraper: Scraper):
    contenu = scraper.getjsondata("nino-negri-5-stelle-sfursat-2022.html")
    assert contenu.informations() == "Madame-Loïk,91,17,93.5,842.0"
    vide = scraper.getjsondata("")
    poubelle = scraper.getjsondata("poubelle")
    assert vide.informations() == "None,None,None,None,None"
    assert poubelle.informations() == "None,None,None,None,None"


def test_search(scraper: Scraper):
    m = mock_open()
    with patch("builtins.open", m):
        scraper.getvins("wine.html", "fake_file.csv", True)
    assert m().write.called
    all_writes = "".join(call.args[0] for call in m().write.call_args_list)
    assert "Madame-Loïk,91,17,93.5,842.0" in all_writes