Benutzer:Nadine Anskeit

Embedded LearningApp: https://learningapps.org/watch?app=10586475
A central methodological problem is the validity of empirical measurements. AI-supported writing processes produce texts whose quality is only a limited indicator of actual competence gains. As Rezat and Schindler (2025, p. 4) emphasize, bias, hallucinations, or algorithmically induced stylistic smoothing can mean that apparent improvements in the output are primarily attributable to the technical support rather than to individual writing development. Empirical designs must therefore include procedures that differentiate between AI effects and actual learning processes, for example through additional process data, qualitative reflections, or control groups that isolate the influence of the AI.
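As an illustration only, the following minimal sketch compares quality gains in an AI-supported group with those in a control group; the group names and ratings are hypothetical placeholders and are not taken from the studies cited.
<syntaxhighlight lang="python">
# Illustrative sketch: comparing quality gains of an AI-supported group
# with a control group to separate AI effects from individual learning.
# All names and numbers are hypothetical placeholders.

def mean(values):
    return sum(values) / len(values)

# Hypothetical pre-/post-test ratings of text quality (e.g. on a 1-6 scale)
ai_group      = {"pre": [3.1, 2.8, 3.4, 3.0], "post": [4.2, 4.0, 4.5, 4.1]}
control_group = {"pre": [3.0, 3.2, 2.9, 3.1], "post": [3.4, 3.6, 3.3, 3.5]}

ai_gain      = mean(ai_group["post"]) - mean(ai_group["pre"])
control_gain = mean(control_group["post"]) - mean(control_group["pre"])

# The difference in gains approximates the AI-attributable effect; process
# data and qualitative reflections would still be needed to interpret it.
print(f"AI group gain:       {ai_gain:.2f}")
print(f"Control group gain:  {control_gain:.2f}")
print(f"Difference in gains: {ai_gain - control_gain:.2f}")
</syntaxhighlight>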
Another key area of tension arises from the relationship between output and process focus. While classical writing research often assesses the quality of text products using rating procedures, the production process, especially the interaction between writers and AI systems, often receives too little attention. Lehnen and Steinhoff (2022, p. 15 f.) describe this interaction as "coactivity" between humans and machines, which calls for new forms of data collection. For a valid analysis, it is therefore not sufficient to consider finished texts: log files, prompt histories, chat transcripts, or accompanying self-reports can provide insights into cognitive and metacognitive processes and thus significantly broaden the perspective on writing development.
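One way such process data could be captured is a structured log of the human-AI interaction. The sketch below is a hypothetical format with assumed field names, not an established instrument.
<syntaxhighlight lang="python">
# Sketch of a structured log entry for one step of a human-AI writing
# interaction; all fields and names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class InteractionEvent:
    timestamp: datetime   # when the event occurred
    learner_id: str       # pseudonymised learner identifier
    event_type: str       # e.g. "prompt", "ai_response", "revision", "self_report"
    content: str          # prompt text, AI output, or learner note
    draft_version: int    # which draft of the text the event belongs to

# Example: a learner asks the AI system for feedback on an introduction.
event = InteractionEvent(
    timestamp=datetime.now(),
    learner_id="L-014",
    event_type="prompt",
    content="Can you give me feedback on my introduction?",
    draft_version=2,
)
print(event)
</syntaxhighlight>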
Furthermore, the question arises as to the sustainability of the observed effects. The majority of studies to date have relied on one-off interventions. Pissarek and Wild (2018) already point to the value of pre-/post-/follow-up designs in the context of classical text production, as they empirically capture both immediate and delayed effects. This is particularly crucial in the context of AI-assisted writing, since learners often benefit from short-term relief effects while the long-term consequences for self-regulation and transfer remain unclear.
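A three-wave design can distinguish immediate from delayed effects by comparing post-test and follow-up scores against the pre-test. The following sketch uses hypothetical mean scores purely for illustration.
<syntaxhighlight lang="python">
# Sketch of immediate vs. delayed effects in a pre-/post-/follow-up design.
# Scores are hypothetical mean text-quality ratings at three measurement points.
waves = {"pre": 3.0, "post": 4.1, "follow_up": 3.4}

immediate_effect = waves["post"] - waves["pre"]       # short-term gain after the intervention
delayed_effect   = waves["follow_up"] - waves["pre"]  # what remains after a delay
retention        = delayed_effect / immediate_effect  # share of the gain that persists

print(f"Immediate effect: {immediate_effect:.2f}")
print(f"Delayed effect:   {delayed_effect:.2f}")
print(f"Retention:        {retention:.0%}")
</syntaxhighlight>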
Finally, contextual and ethical factors must be systematically incorporated into methodological planning. The lack of transparency in generative models, uncertainties regarding data protection laws, and algorithmic biases (cf. Gethmann et al., 2022, p. 155ff.) can significantly influence not only research practice but also the interpretation of empirical results. Methodologically, this means that studies should disclose technical conditions and explicitly document ethical standards. Comparable reporting standards are also necessary to ensure the replicability of studies and to reveal biases in data collection or interpretation.
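To support replicability, the technical and ethical conditions of a study could be documented in a simple, machine-readable record. The fields below are an assumed minimal set, not an established reporting standard.
<syntaxhighlight lang="python">
# Sketch of a study-metadata record documenting technical and ethical
# conditions of an AI-assisted writing study; all fields are illustrative.
study_conditions = {
    "model": "example-llm-1.0",  # placeholder model name and version
    "decoding_parameters": {"temperature": 0.7, "max_tokens": 800},
    "prompts_archived": True,    # full prompts stored alongside the data
    "data_protection": "pseudonymised logs, informed consent obtained",
    "known_limitations": ["possible training-data bias", "hallucinations"],
    "measurement_points": ["pre", "post", "follow-up"],
}

for key, value in study_conditions.items():
    print(f"{key}: {value}")
</syntaxhighlight>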
