I'll pass. See, the way science works is that I don't have to re-prove the existing, overwhelming, peer-reviewed consensus data model. Nor do I have to teach a basic statistics course. If you're going to make claims like "no one can know ... blank," you have the burden to demonstrate why that's so. Hopefully with something beyond "that sounds hard."
I've already given you a few examples of areas relied upon by that particular graph which climate scientists currently dispute as to the margin of error for early-timeframe analyses. If X + Y = Z, but X and Y are potentially different values than those used to plot point Z, that potential variance is called a "margin of error." Statistically, the margin of error they show for the 10,000-year-old values in particular is utter crap, because the individual variables themselves carry statistical uncertainties that are presently disputed even within the climate-science community, and which they are flat-out ignoring. They picked a set of assumptions and made a graph. They didn't include any of the uncertainty factors, and from what I have seen they didn't even use the assumed median values in their uncertainty calculations. They picked data, made a graph to show a particular viewpoint without giving it a sufficient margin of error, and sent it to press. That's ignoring the whole quality-of-sample issue from that far back entirely.
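To put actual numbers on that, here's a minimal Python sketch of standard error propagation. The values are made up for illustration, not taken from any real reconstruction; the point is only that the plotted quantity's margin of error has to reflect the uncertainties of its inputs:

```python
import math

# Hypothetical proxy-derived inputs, each with its own 1-sigma uncertainty.
# These numbers are illustrative only.
x, sigma_x = 12.0, 0.6   # contribution estimated from one proxy
y, sigma_y = 11.5, 0.8   # contribution estimated from another proxy

z = x + y

# For independent inputs, uncertainties combine in quadrature:
sigma_z = math.sqrt(sigma_x**2 + sigma_y**2)

print(f"Z = {z:.1f} +/- {sigma_z:.1f}")  # Z = 23.5 +/- 1.0
```

Note the combined uncertainty (±1.0 here) is necessarily larger than either input's alone. You can't honestly draw Z with a ±0.4 error bar when the variables feeding it each carry more uncertainty than that.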
It should be obvious that while determining that the average global temperature for March 6, 25000 BCE was 24 °C is extraordinarily difficult, determining the average global temperature between 15000 BCE and 5000 BCE is substantially easier. Accuracy vs. precision and all that.
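A quick simulation makes the point. With entirely synthetic numbers (a hypothetical true mean and a hypothetical per-reading proxy error, neither drawn from real data), a single noisy reading is all over the place, but the mean over many independent readings tightens like 1/sqrt(n):

```python
import random

random.seed(42)

TRUE_MEAN = 24.0     # hypothetical long-run average, degrees C
PROXY_NOISE = 5.0    # hypothetical per-reading proxy error (1 sigma)

def proxy_estimate(n_samples):
    """Average n noisy, independent proxy readings."""
    readings = [random.gauss(TRUE_MEAN, PROXY_NOISE) for _ in range(n_samples)]
    return sum(readings) / n_samples

# One reading (a "single day"): error can easily be several degrees.
# Many readings (a long window): the standard error of the mean
# shrinks like PROXY_NOISE / sqrt(n), so the estimate converges.
for n in (1, 100, 10_000):
    print(n, round(proxy_estimate(n), 2))
```

That's the accuracy/precision distinction in one loop: each reading is imprecise, but the long-window average is accurate.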
Sure. But assuming a ±0.4-degree margin of error for that data the way they did on the graph, given the underlying uncertainty and the currently disputed variables of the model, is scientifically dishonest if nothing else for the older data they added. The margin of error should quite obviously widen considerably on anything older than 2,000 years as it approaches 10,000 years, since our measurements and observable datasets become much more uncertain due to the effects of time and variables for which we have no historical observational data. Especially when there is no consensus on the early-timeframe climate-condition variables that were used to create that model. The basic data and methodology are not at issue here. It's the accuracy of the older datasets, and the currently contradictory theories on various important factors mentioned previously in the thread, that is the problem. Their error bars are crap on the left side of the graph. You seem to think otherwise but provide no evidence to the contrary.
Your assertion that it's just too hard reflects a Victorian understanding of analysis at best. Techniques to establish variance and confidence intervals have existed for quite a while now.
It's not "too hard"; it's "impossible" to create that particular graph with that narrow a margin of error without ignoring valid, though less popular (note I don't say "correct"), studies that show a greater range of uncertainty on the oldest data sets for the variables used to build their graph model. The resolution of our data degrades significantly the further back you go, and our uncertainties rise. There is a degrading confidence interval on the earliest data sets and a larger margin of error that is not reflected on their graph, which has been my point all along.
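Here's what "degrading confidence interval" means in plain arithmetic. The record counts and per-record uncertainty below are invented for illustration; the shape of the result is the point, not the specific values:

```python
import math

PROXY_SIGMA = 0.8  # hypothetical per-record uncertainty, degrees C

# Illustrative only: suppose the number of usable, well-dated,
# independent proxy records drops off for older periods.
records_by_age = {      # age in years -> assumed usable record count
    1_000: 400,
    2_000: 250,
    5_000: 60,
    10_000: 15,
}

for age, n in records_by_age.items():
    # 95% CI half-width for the mean of n independent records.
    half_width = 1.96 * PROXY_SIGMA / math.sqrt(n)
    print(f"{age:>6} yr ago: +/- {half_width:.2f} C")
```

Even in this toy case, the purely statistical error bar grows several-fold as the record thins out, and that's before adding any systematic or model uncertainty on top. A graph with a flat, narrow band across its whole timespan is asserting that none of this happens.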
Would some diagrams with large arrows pointing at simple concepts help? Maybe I could toss in an animated dancing frog in the margins or something if that would make this easier for you to understand.