
A year in books: philosophy, psychology, and political economy

If you follow the Julian calendar — which I do when I need a two-week extension on overdue work — then today is the first day of 2015.

Happy Old New Year!

This also means that this is my last day to be timely with yet another year-in-review post; although I guess I could also celebrate the Lunar New Year on February 19th. Last year, I made a resolution to read one not-directly-work-related book a month, and only satisfied it in an amortized analysis; I am repeating the resolution this year. Since I only needed two posts to catalog the practical and philosophical articles on TheEGG, I will try something new with this one: a list and mini-review of the books I read last year to meet my resolution. I hope that, based on this, you can suggest some books for me to read in 2015; or maybe my comments will help you choose your next book to read. I know that articles and blogs I’ve stumbled across have helped guide my selection. If you want to support TheEGG directly and help me select the books that I will read this year then consider donating something from TheEGG wishlist.

Read more of this post

Personification and pseudoscience

If you study the philosophy of science — and sometimes even if you just study science — then at some point you might get the urge to figure out what you mean when you say ‘science’. Can you distinguish the scientific from the non-scientific or the pseudoscientific? If you can then how? Does science have a defining method? If it does, then does following the steps of that method guarantee science, or are some cases just rhetorical performances? If you cannot distinguish science and pseudoscience then why do some fields seem clearly scientific and others clearly non-scientific? If you believe that these questions have simple answers then I would wager that you have not thought carefully enough about them.

Karl Popper did think very carefully about these questions, and in the process introduced the problem of demarcation:

The problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other

Popper believed that his falsification criterion solved (or was an important step toward solving) this problem. Unfortunately, due to Popper’s discussion of Freud and Marx as examples of the non-scientific, many now misread the demarcation problem as a quest to separate epistemologically justifiable science from epistemologically unjustifiable pseudoscience, with a moral judgement of Good attached to the former and Bad to the latter. Toward this goal, I don’t think falsifiability makes much headway. In this (mis)reading, falsifiability excludes too many reasonable perspectives, like mathematics or even non-mathematical beliefs such as Gandy’s variant of the Church-Turing thesis, while including much of in-principle-testable pseudoscience. Hence — on this version of the demarcation problem — I would side with Feyerabend and argue that a clear separation between science and pseudoscience is impossible.

However, this does not mean that I don’t find certain traditions of thought to be pseudoscientific. In fact, I think there is a lot to be learned from thinking about features of pseudoscience. A particular question that struck me as interesting was: What makes people easily subscribe to pseudoscientific theories? Why are some kinds of pseudoscience so much easier or more tempting to believe than science? I think that answering these questions can teach us something not only about culture and the human mind, but also about how to do good science. Here, I will repost (with some expansions) my answer to this question.
Read more of this post

Philosophy of Science and an analytic index for Feyerabend

Throughout my formal education, the history of science has been presented as a series of anecdotes and asides. The philosophy of science, encountered even less, was passed down not as a rich debate and ongoing inquiry but as a set of rules that had best be followed. To paraphrase Gregory Radick, this presentation is mere propaganda; it is akin to learning the history of a nation from its travel brochures. Thankfully, my schooling did not completely derail my learning, and I’ve had an opportunity to make up for some of the lost time since.

One of the philosophers of science that I’ve enjoyed reading the most has been Paul Feyerabend. His provocative writing in Against Method and advocacy of what others have called epistemological anarchism — the rejection of any rules of scientific methodology — has been influential to my conception of the role of theorists. Although I’ve been meaning to write down my thoughts on Feyerabend for a while now, I doubt that I will bring myself to do it anytime soon. In the meantime, dear reader, I will leave you with an analytic index consisting of links to the thoughts of others (interspersed with my typical self-links) that discuss Feyerabend, Galileo (his preferred historic case study), and consistency in science.
Read more of this post

Cross-validation in finance, psychology, and political science

A large chunk of machine learning (although not all of it) is concerned with predictive modeling, usually in the form of designing an algorithm that takes in some data set and returns an algorithm (or sometimes, a description of an algorithm) for making predictions based on future data. In terminology more friendly to the philosophy of science, we may say that we are defining a rule of induction that will tell us how to turn past observations into a hypothesis for making future predictions. Of course, Hume tells us that if we are completely skeptical then there is no justification for induction — in machine learning we usually know this as a no-free-lunch theorem. However, we still use induction all the time, usually with some confidence, because we assume that the world has regularities that we can extract. Unfortunately, this just shifts the problem, since there are countless possible regularities and we have to identify ‘the right one’.

Thankfully, this restatement of the problem is more approachable if we assume that our data set did not conspire against us. That being said, every data set, no matter how ‘typical’, has some idiosyncrasies, and if we tune in to these instead of the ‘true’ regularity then we say we are over-fitting. Being aware of and circumventing over-fitting is usually one of the first lessons of an introductory machine learning course. The general technique we learn is cross-validation or out-of-sample validation. One round of cross-validation consists of randomly partitioning your data into a training and a validating set, then running your induction algorithm on the training set to generate a hypothesis algorithm, which you test on the validating set. A ‘good’ machine learning algorithm (or rule for induction) is one where the performance in-sample (on the training set) is about the same as out-of-sample (on the validating set), and both performances are better than chance. The technique is so foundational that the only reliable way to earn zero on a machine learning assignment is by not doing cross-validation of your predictive models. The technique is so ubiquitous in machine learning and statistics that the StackExchange dedicated to statistics is named CrossValidated. The technique is so…

You get the point.
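To make the procedure concrete, here is a minimal sketch of one round of cross-validation using only the Python standard library. The data, the 80/20 split, and the one-parameter least-squares model are all hypothetical choices for illustration, not anything prescribed in the text.

```python
import random

# Toy data set (hypothetical): y is roughly 2*x with a small perturbation.
random.seed(0)
data = [(x, 2.0 * x + random.uniform(-0.5, 0.5)) for x in range(100)]

def one_round_cv(data, train_frac=0.8):
    """One round of cross-validation: randomly partition the data into a
    training and a validating set, fit a one-parameter model y = w*x by
    least squares on the training set, and report the mean squared error
    both in-sample and out-of-sample."""
    shuffled = data[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, validate = shuffled[:cut], shuffled[cut:]

    # The 'rule of induction': least-squares slope w = sum(x*y) / sum(x*x).
    w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

    def mse(points):
        return sum((y - w * x) ** 2 for x, y in points) / len(points)

    return mse(train), mse(validate)

in_sample, out_of_sample = one_round_cv(data)
print(in_sample, out_of_sample)
```

A ‘good’ outcome here is that the two printed errors are comparable; a much larger out-of-sample error than in-sample error is the signature of over-fitting. Repeating the round with fresh random splits and averaging gives the usual k-fold flavour of the technique.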

If you are a regular reader, you can probably induce from past posts that my point is not to write an introductory lecture on cross-validation. Instead, I wanted to highlight some cases in science and society when cross-validation isn’t used, when it needn’t be used, and maybe even when it shouldn’t be used.
Read more of this post

Why academics should blog and an update on readership

It’s that time again: TheEGG has passed a milestone — 150 posts under our belt! — and so I feel obliged to reflect on blogging, plus update the curious on the readership statistics.

About a month ago, Nicholas Kristof bemoaned the lack of public intellectuals in the New York Times. Some people responded with defenses of the ‘busy academic’; others agreed, but shifted the conversation from the more traditional media Kristof was focused on to blogs. As a fellow blogger, I can’t help but support this shift, but I also can’t help but notice the conflation of two very different notions: the public intellectual and the public educator.
Read more of this post

From heuristics to abductions in mathematical oncology

As Philip Gerlee pointed out, mathematical oncologists have contributed two main focuses to cancer research: following Nowell (1976), they’ve stressed the importance of viewing cancer progression as an evolutionary process, and — of less clear-cut origin — recognizing the heterogeneity of tumours. Hence, it would seem appropriate that mathematical oncologists might enjoy Feyerabend’s philosophy:

[S]cience is a complex and heterogeneous historical process which contains vague and incoherent anticipations of future ideologies side by side with highly sophisticated theoretical systems and ancient and petrified forms of thought. Some of its elements are available in the form of neatly written statements while others are submerged and become known only by contrast, by comparison with new and unusual views.

If you are a total troll or pronounced pessimist, you might view this as even lending credence to some anti-scientism views of science as a cancer of society. This is not my reading.

For me, the important takeaway from Feyerabend is that there is no single scientific method or overarching theory underlying science. Science is a collection of various tribes and cultures, with their own methods, theories, and ontologies. Many of these theories are incommensurable.
Read more of this post