Analyzing the thoughts and making amendments

Vishnoi: If I want to change your opinion, I first need to understand it. We have not done that yet. We are trying to figure out the stage at which a person’s opinion can no longer be changed. Elisa and I call it ‘a point of no return’ in a person’s opinion.

The Ken: Do you think people would agree to such controls being passed to them? What if, say, as a Modi fan, I take the toggle to the other extreme?

Vishnoi: It’s about choice. Today, there’s no choice in search. It’s a question of personal choice versus depending on Google. Can you trust Google for everything? I think our voices are not reaching Google.

You need to place this in its cultural setting. In European countries, people will slide the toggle to one extreme. Maybe we come up with a mechanism where we don’t let it slide to any extreme. But then, who decides? Ultimately it is a question of authority—the government can decide, a city can decide… When we presented our theory at the conference, people were scared by the idea of individuals having such control over search.

The fear and psychological question that arose was: If you give control to people, where will they converge?
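To make the toggle idea concrete, here is a minimal Python sketch of how a user-set lean could be blended into a ranking score while being clamped away from the extremes. Everything here, from the function names to the weights and the clamp bound, is an illustrative assumption, not Vishnoi’s actual mechanism.

```python
# Illustrative sketch only: a hypothetical ranking score that blends a
# relevance signal with a user-controlled lean toggle. The clamp keeps
# the toggle away from the extremes, as discussed above.

def clamp(value: float, lo: float, hi: float) -> float:
    """Restrict value to the interval [lo, hi]."""
    return max(lo, min(hi, value))

def rank_score(relevance: float, item_lean: float, user_toggle: float,
               max_extreme: float = 0.8) -> float:
    """Score one search result.

    relevance   -- topical match, in [0, 1]
    item_lean   -- the item's ideological lean, in [-1, +1]
    user_toggle -- the user's chosen lean, in [-1, +1]; clamped so it
                   can never slide all the way to an extreme
    """
    toggle = clamp(user_toggle, -max_extreme, max_extreme)
    # Items whose lean sits close to the (clamped) toggle get boosted.
    alignment = 1.0 - abs(item_lean - toggle) / 2.0
    return 0.7 * relevance + 0.3 * alignment

# Even a user who slides the toggle fully to +1 is held at +0.8, so a
# maximally partisan item never gets the full alignment boost.
print(rank_score(relevance=0.9, item_lean=0.0, user_toggle=0.0))  # ~0.93
print(rank_score(relevance=0.9, item_lean=1.0, user_toggle=1.0))  # ~0.90
```

Clamping answers the ‘extreme’ worry mechanically, but as Vishnoi notes, someone still has to decide where the bounds sit.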

The Ken: What’s your defense? If today I trust Google, then why can’t I trust my own judgment and use toggles to search, right?

Vishnoi: Exactly. I think it’s our Right to Information (RTI) that is being violated. When I learn about the world through search, it is shown to me with an objective. Google has some ‘function’ it tries to optimize when it chooses what to show people who search.

Just as receiving government announcements through [various departments’] Facebook pages bothers you as a citizen today (and rightly so, because the information arrives not in vanilla fashion but colored with ‘likes’ and ‘dislikes’), I believe much of our work is geared towards establishing that RTI.

The Ken: Is that why you have created a demo on how consuming a certain kind of news—from a left-leaning or right-leaning outlet—can be polarising?

Vishnoi: We wanted to show what a balanced content delivery engine would look like. You can play around with this prototype and see for yourself the difference between the two approaches to content delivery. (Here’s the demo. You can see how our clicks influence the articles we see in our news feeds.)
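The feedback loop the demo illustrates can be sketched in a few lines. This simulation is a hedged approximation, not the prototype’s code; the lean labels, probabilities, and update rule are all assumptions made for this example.

```python
import random

# Hedged sketch of a click-driven feed versus a balanced one. Each click
# raises the weight of the clicked lean, so the personalised feed narrows
# toward one side (a filter bubble); the balanced engine keeps both leans
# in view regardless of clicks.

random.seed(0)
LEANS = ["left", "right"]

def click_driven_feed(rounds: int = 50) -> dict:
    """Engagement feedback: each shown item counts as a click on its lean."""
    weights = {"left": 1.0, "right": 1.0}
    shown = {"left": 0, "right": 0}
    for _ in range(rounds):
        total = sum(weights.values())
        lean = random.choices(LEANS, [weights[l] / total for l in LEANS])[0]
        shown[lean] += 1
        weights[lean] *= 1.1  # more of whatever you clicked
    return shown

def balanced_feed(rounds: int = 50) -> dict:
    """A balanced engine ignores the click signal and alternates leans."""
    return {"left": rounds // 2, "right": rounds - rounds // 2}

print("click-driven:", click_driven_feed())  # skews toward one lean
print("balanced:   ", balanced_feed())       # stays roughly even
```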

Vishnoi: Well, let’s say there’s a Wiki page which says Nehru killed Gandhi. How will you check if this is fake news or not?

Any algorithm would have to check against historical facts, so some kind of authority comes into play. If you write something false on a Wiki page, someone will edit and correct it. That is why this kind of fake news is not the dangerous kind.

Unless we have some notion of what constitutes fake news, how can you address the problem? And fact-checking is not a trivial thing. Suppose somebody says Rahul [Gandhi]’s education is XYZ, or I make a statement right now for an algorithm to check. How will you verify it? One check could be that if I say nonsensical things, my reputation is on the line; and if you publish them, your credibility is on the line. Those could act like checks: I will not state factually wrong things. But that’s not a logical way to come to a conclusion. The confidence [about the truthfulness of news] has to come from a source.
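The reputation idea can be pictured as a credibility weight that decays whenever a speaker’s statements turn out to be false. A hedged sketch, with an update rule that is purely an assumption:

```python
# Hypothetical sketch of reputation as a check: a speaker's credibility
# weight drops sharply on false statements and rebuilds slowly on true
# ones. The numbers are illustrative, not a proposed algorithm.

class Reputation:
    def __init__(self, weight: float = 1.0):
        self.weight = weight  # credibility in (0, 1]

    def record(self, statement_was_true: bool) -> None:
        if statement_was_true:
            self.weight = min(1.0, self.weight * 1.02)  # slow recovery
        else:
            self.weight *= 0.5  # false claims are costly

speaker = Reputation()
for verdict in [True, True, False, False]:
    speaker.record(verdict)
print(round(speaker.weight, 3))  # 0.25: two false claims halve it twice
```

As Vishnoi says, this only acts like a check; the verdicts themselves still have to come from a source.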

Vishnoi: People are working on fake news detection, but you can think of a solution only in specific contexts; rationality has to be somewhat bounded. Say somebody claims that India gained independence in 1949. To check this, there must be an authority on this type of fact, and the algorithm can check there.
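That authority check can be pictured as a lookup against a trusted source for that class of fact. In this hypothetical sketch, the AUTHORITY table stands in for such a source (say, a curated encyclopedia); it is an assumption for illustration, not a real API.

```python
# Hypothetical sketch of checking a factual claim against an authority
# for that class of fact. The algorithm abstains when no authority
# covers the claim, reflecting the bounded rationality Vishnoi describes.

AUTHORITY = {
    ("India", "year_of_independence"): 1947,
}

def check_claim(subject: str, predicate: str, claimed_value):
    """Return a verdict only when an authority covers this type of fact."""
    key = (subject, predicate)
    if key not in AUTHORITY:
        return "no authority for this fact; cannot verify"
    return "verified" if AUTHORITY[key] == claimed_value else "contradicted"

print(check_claim("India", "year_of_independence", 1949))  # contradicted
print(check_claim("India", "year_of_independence", 1947))  # verified
print(check_claim("Nehru", "killed", "Gandhi"))            # cannot verify
```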
