I created a YouTube channel to explain my work and some of the machine learning techniques I use. Here’s the first episode of the deep learning series. Enjoy!
Mood disorders such as anxiety and depression can be invisible, but they take a heavy toll on daily life. Growing up in a culture where mental health is never discussed, I watched one of my parents struggle with these conditions over the years while receiving barely any support from family and friends. Symptoms typical of these conditions are often attributed to having a ‘bad’ personality, and disclosing life difficulties is considered weak and shameful. “You don’t need others to know you are having a bad day. People will just think that you are incapable of handling your own issues.” This is a common coping strategy in a culture that suppresses negative emotions.
Twenty years later, we have many online mental health support forums. People living with these conditions in a culture that stigmatizes mental illness can seek support anonymously from strangers who care. A big question is: can I know whether I have a mood disorder without seeing a doctor? The answer is no, you can’t. Mental illness diagnosis is difficult, partly because the diagnostic criteria that have been used for more than half a century simply group symptoms together without considering the underlying biological mechanisms. Diagnosing a physical illness means identifying the problematic organ: symptoms may differ, but they all stem from the malfunction of one or more particular organs. With mental illnesses, however, it is usually very difficult to locate the cause. It could originate from hormone issues, brain injuries, sudden adverse life events, childhood trauma, genetic components and so on. Doctors train for years to learn the symptoms case by case, over hundreds and thousands of hours of conversation with patients. Deciding whether a set of symptoms looks more like one disorder or another is difficult even for doctors, let alone for current AI models that rely on a handful of social media records.
However, the fact that AI cannot make a diagnosis doesn’t mean it cannot support human decisions. If you don’t want to seek a diagnosis before you know something is off, AI can probably help with early detection. We are now developing technologies that help people understand whether they have shown signs of mood disorders based on their social media records. These technologies might ring the alarm bell in your mind that it’s time to sign up for an anonymous online support group, even if you really don’t want to see a doctor because you are afraid your family and friends will find out. In our work Building a profile of subjective well-being for social media users and Inspecting Vulnerability to Depression From Social Media Affect, we found that social media data contain signals that can be used to infer one’s subjective well-being and mental health status. During my PhD, I focus especially on social media signals that may be directly associated with symptoms of affective disorders. In psychology, a person’s emotions and feelings are called ‘affect’, and affective patterns are an important signal for affective (mood) disorders such as anxiety and depression. In the paper The Effect of User Psychology on the Content of Social Media Posts: Originality and Transitions Matter (accepted), we found that the transition states of valence (not just its magnitude and frequency) reflect one’s personality and mental health status; for example, extroverted participants are more likely to transition from a positive mood to a positive mood. In a paper we submitted two months ago, we constructed a mood profile for social media users based on their posts and found that the mood profile is highly relevant to the level of depressive symptoms.
Recently Junyan Jiang published the Chinese Political Elite Database (CPED), which contains demographic and career information on Chinese political leaders at multiple levels (ref https://www.junyanjiang.com/data.html). It’s a very interesting dataset. Here I will use unsupervised machine learning to explore the structure of the dataset and then apply a supervised learning model to predict the highest position of the political leaders from their background information. All the code is here: https://github.com/luciasalar/government_officials.git
Information in the CPED includes: name, gender, ethnicity, birthday, birthplace, education background, whether the person joined the army, whether the person has been expelled from the Communist Party of China (CPC), current position, whether the person committed a crime, when the person joined the CPC, how long the person has worked in the government, when and where the person was relocated, job grade, name of the position, and so on.
Before any learning, I did a bit of work to process the dataset: 1) I converted categorical variables to dummy variables. 2) I derived some new variables from the existing ones: age (the time difference between now and the birthday); relocation frequency (time worked in the government divided by the number of times they were relocated); the number of times they worked in the central government; the percentage of central government positions among all positions held; the number of times a person was relocated by a national institute or the central government; the number of years they have worked in the government; and the number of years they have been in the CPC. 3) I also recoded the work locations into two broad groups, developed and developing. However, location later turned out to be a confusing variable in the cluster analysis. Maybe I should instead identify the location where each person worked the longest and weight it by that location’s GDP.
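The preprocessing steps above can be sketched in Python with pandas. This is a minimal sketch, not the actual pipeline: the column names (birthday, location, num_relocations, etc.) and the DEVELOPED set are hypothetical placeholders for the real CPED fields.

```python
# Sketch of the preprocessing steps: dummy variables, derived variables,
# and a two-group recode of work location. All column names are assumptions.
import pandas as pd

DEVELOPED = {"Beijing", "Shanghai", "Guangdong"}  # hypothetical "developed" locations

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # 1) convert categorical variables to dummy variables
    df = pd.get_dummies(df, columns=["gender", "ethnicity"])
    # 2) extend variables based on the existing ones
    today = pd.Timestamp("2019-01-01")  # reference date for computing age
    df["age"] = (today - pd.to_datetime(df["birthday"])).dt.days / 365.25
    # time worked in the government divided by the number of relocations
    df["relocate_freq"] = df["gov_working_yrs"] / df["num_relocations"]
    # share of central-government stints among all relocations
    df["central_freq_perce"] = df["central_freq"] / df["num_relocations"]
    # 3) recode work location into two broad groups
    df["developed"] = df["location"].isin(DEVELOPED).astype(int)
    return df
```

The derived ratios mirror the description above; the rest of the extended variables would be built the same way from the raw career records.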
To find out which variables can predict the highest position of a government official, I first ran a cluster analysis on the variables. Here I use model-based clustering; its advantage is that it can fit Gaussian components with non-spherical covariance. After a couple of attempts, I found that the derived variables I generated, together with the job grade, produce nine clusters. Adding other variables, especially location, only confounds the cluster results.
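The analysis above uses mclust in R (the EEV model). A rough Python analogue, for illustration only, is to fit Gaussian mixtures of varying size and covariance shape and keep the best-BIC model. Note that scikit-learn’s covariance types do not map one-to-one onto mclust model names like EEV, and scikit-learn’s BIC is signed so that lower is better (mclust reports it the other way around).

```python
# Rough analogue of model-based clustering with BIC model selection.
# Not the original mclust analysis: sklearn covariance types differ from
# mclust model names, and here lower BIC is better.
import numpy as np
from sklearn.mixture import GaussianMixture

def best_gmm(X, max_k=9, cov_types=("full", "tied", "diag")):
    best, best_bic = None, np.inf
    for cov in cov_types:
        for k in range(2, max_k + 1):
            gmm = GaussianMixture(n_components=k, covariance_type=cov,
                                  random_state=0).fit(X)
            bic = gmm.bic(X)  # lower = better fit/complexity trade-off
            if bic < best_bic:
                best, best_bic = gmm, bic
    return best, best_bic
```

The cluster means reported below correspond to the `means_` attribute of the selected mixture model.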
Best BIC values:
              EEV,9        EEV,8        EEV,7
BIC       -40592.79    -44475.21    -48851.12
BIC diff       0.00     -3882.42     -8258.33

$mean (cluster means, rounded to three decimals; columns are clusters 1–9):

                           1        2        3        4        5        6        7        8        9
central_freq           4.322    8.133    3.292    0.698    0.647    4.672    7.330   10.345    0.000
relocate_freq         12.618   19.600   18.560   17.799   13.798   19.300   19.570   19.346    6.950
nat_ins_relo           4.191    9.600    6.439    3.456    3.991    6.994    9.577   11.188    0.190
central_relo           0.000    1.833    0.570    0.001    0.188    0.333    1.955    0.563    0.165
级别_deputy_director    0.755    1.533    3.130    3.672    2.489    2.077    1.533    1.844    0.766
级别_deputy_leader      0.000    2.867    0.000    0.000    0.000    0.065    2.054    0.063    0.000
级别_deputy_dept        0.069    0.733    1.455    2.069    1.283    1.319    0.992    1.534    0.113
级别_vice_minister      2.495    3.233    3.246    0.532    0.329    4.294    3.588    3.281    0.610
级别_less_dept          0.613    2.033    2.717    3.333    2.392    2.712    2.431    1.315    0.081
级别_no_rank            0.898    1.633    1.723    1.670    1.907    1.701    1.697    1.656    0.130
级别_director           2.093    1.533    4.314    3.501    3.258    2.737    2.543    3.314    2.269
级别_national_leader    0.000    2.367    0.000    0.000    0.000    0.000    0.000    0.000    0.000
级别_dept               0.338    1.267    1.975    2.943    2.042    1.761    1.422    1.750    0.277
级别_minister           1.134    2.133    0.000    0.000    0.000    2.633    2.467    2.278    0.007
gov_working_yrs       34.774   89.185   52.890   50.212   51.416   70.207   65.005   47.965   17.907
age                   69.770   80.019   62.954   62.874   64.070   74.192   75.301   68.923   66.175
join_cpc              43.659   58.067   38.667   38.282   40.338   50.627   50.109   43.813   40.977
join_cpc_age          25.242   22.622   23.436   23.723   23.579   22.835   24.540   24.990   23.578
freq_change_pos_nor    3.029    4.933    2.934    2.936    4.011    3.758    3.428    2.515    3.108
central_freq_perce     0.363    0.421    0.186    0.037    0.049    0.251    0.365    0.537    0.000
The above table shows all the variables I used in clustering: freq = frequency; nat_ins_relo = number of times relocated by a national institute; central_relo = number of times relocated by the central government; gov_working_yrs = number of years working in the government; join_cpc = number of years since joining the CPC; join_cpc_age = age when they joined the CPC; central_freq = number of times they worked in the central government; freq_change_pos_nor = relocation frequency normalized by the number of years working in the government; central_freq_perce = number of times working in the central government divided by the number of times being relocated.
It’s tedious to inspect what information these groups contain manually, so I wrote a function to find which cluster has the highest mean for each variable. A few clusters stand out. Group 2 contains most national leaders; let’s call it the ‘leader’ group. Group 3 has the most directors; call it the ‘director’ group. Group 4 has the most deputy directors, deputy department heads, department heads, and below-department-head positions; call it the ‘department heads’ group. Group 5 contains officials without rankings (‘no ranking’), group 6 is the ‘ministers’ group, and group 8 contains the people who worked in the central government the most times. We can also see that officials in the leader group have the highest mean age, group 1 contains the fewest high-level officials, and officials in group 9 worked in the government for the shortest time. So unsupervised learning managed to pick up some patterns in these variables.
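A minimal version of that helper (the original was in R) can be written with pandas: given a (variables × clusters) table of cluster means, `idxmax` along the columns returns the winning cluster per variable. The three example values below are taken from the table above (age and gov_working_yrs for clusters 1–3, plus cluster 2).

```python
# "Which cluster has the highest mean for each variable?" as one pandas call.
import pandas as pd

def top_cluster_per_variable(means: pd.DataFrame) -> pd.Series:
    # For every variable (row), return the label of the cluster (column)
    # with the largest mean value.
    return means.idxmax(axis=1)

# Small excerpt of the cluster-means table above:
means = pd.DataFrame(
    {"c1": [69.8, 34.8], "c2": [80.0, 89.2], "c3": [62.9, 52.9]},
    index=["age", "gov_working_yrs"],
)
print(top_cluster_per_variable(means))  # both variables peak in cluster c2
```

On this excerpt, cluster 2 (the ‘leader’ group) wins both rows, matching the observation that the leader group has the highest mean age and the longest government careers.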
Now let’s run a regression and see whether these variables can predict the job grade. We selected each official’s highest job grade as the label. Almost all of the selected variables are significant predictors (the exception is nat_ins_relo).
lm(formula = job_grade ~ ., data = reg_fea)

Residuals:
    Min      1Q  Median      3Q     Max
-3.2911 -0.4882 -0.0403  0.4308  5.1570

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)
(Intercept)          8.198359   0.159396  51.434  < 2e-16 ***
central_freq         0.098955   0.013897   7.121 1.27e-12 ***
Freq                -0.052931   0.005458  -9.697  < 2e-16 ***
nat_ins_relo         0.003095   0.006357   0.487 0.626389
central_relo        -0.052582   0.014540  -3.616 0.000303 ***
time_diff           -0.019530   0.001595 -12.245  < 2e-16 ***
age                 -0.016151   0.005317  -3.038 0.002398 **
join_cpc            -0.016090   0.005559  -2.895 0.003817 **
join_cpc_age        -0.020595   0.006256  -3.292 0.001004 **
freq_change_pos_nor  0.182062   0.019311   9.428  < 2e-16 ***
central_freq_per    -2.759989   0.207682 -13.290  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.8512 on 3906 degrees of freedom
Multiple R-squared: 0.4649, Adjusted R-squared: 0.4635
F-statistic: 339.3 on 10 and 3906 DF, p-value: < 2.2e-16
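For readers who prefer Python, the same kind of ordinary least squares fit can be sketched with plain numpy. This uses synthetic placeholder data, not the CPED features; the lm() output above is the actual analysis.

```python
# OLS sketch in numpy on synthetic data (stand-ins for the CPED features).
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # three placeholder predictors
beta_true = np.array([0.1, -0.016, 0.18])         # arbitrary illustrative slopes
y = 8.2 + X @ beta_true + rng.normal(scale=0.85, size=n)

# Prepend an intercept column, then solve least squares.
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With this much data and moderate noise, `beta_hat` lands close to the intercept 8.2 and the true slopes.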
OK, the final part is machine learning. Here I build a very, very basic SVM model in R; I’ll do proper ML in Python on a lazy weekend. The basic model turns out really not bad! First, I recode the job grade to binary: anyone below minister level is 0, and ministers and national leaders are 1. This gives a balanced dataset.
   0    1
1941 1982
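The SVM code itself isn’t shown above (it was fitted in R), so here is a hypothetical Python sketch of the same setup: a binary label, a train/test split, and a scaled SVM. The feature matrix and label rule below are synthetic placeholders, not the CPED data.

```python
# Sketch of the binary-SVM setup described above, on placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))  # stand-in for the official-level features
# Stand-in binary label (in the real analysis: 1 = minister level and above).
y = ((X[:, 0] + 0.5 * rng.normal(size=400)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(round(f1, 2))
```

Scaling before an RBF-kernel SVM matters because the kernel is distance-based; unscaled features would dominate the decision.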
The F1 score can be computed from the confusion matrix (rows are the true labels y, columns are the predictions):
   predictions
y     0    1
  0 477   97
  1 156  446
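Reading the positive class (label 1) off that matrix gives TP = 446, FP = 97 and FN = 156, from which precision, recall and F1 follow directly:

```python
# F1 for the positive class, computed from the confusion matrix above.
tp, fp, fn = 446, 97, 156
precision = tp / (tp + fp)  # 446 / 543 ≈ 0.821
recall = tp / (tp + fn)     # 446 / 602 ≈ 0.741
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.779
```

So the basic SVM reaches an F1 of roughly 0.78 on this balanced task.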
In NLP, we often come across cosine similarity, especially when we need to measure the distance between vectors. I always wondered why we don’t use Euclidean distance instead: I thought of cosine similarity as merely measuring an angle, whereas with Euclidean distance you add up the differences across all the dimensions.
So here is an explanation I found on Quora that made me ‘grok’ it.
You are a very polite person and you liked my answer, so in the comment section you have written “good” 4 times and “helpful” 8 times (just numbers!! :)) … something like “a very good answer which is too much helpful. It will be helpful for good understanding. People who are not that good in maths can find the answer helpful…” and so on.
A friend of yours who doesn’t talk much might write just: “good and helpful. I found it helpful for my studies.”
What is the count? “Good”: 1, and “helpful”: 2.
If I try to find the cosine similarity between these comments (or documents, as a text miner would say :)), it will be exactly 1! (Look up the formula, it’s ultra easy.)
There you go: with cosine similarity, you measure the similarity of direction instead of magnitude.
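The example above can be checked in a few lines: the two comment vectors are (good, helpful) counts of (4, 8) and (1, 2), which point in exactly the same direction but have very different lengths.

```python
# Cosine similarity vs Euclidean distance for the two comment vectors above.
import math

def cosine(a, b):
    # dot product divided by the product of the vector lengths
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def euclidean(a, b):
    return math.dist(a, b)

a, b = (4, 8), (1, 2)
print(cosine(a, b))     # ≈ 1.0 -- same direction, maximal similarity
print(euclidean(a, b))  # ≈ 6.71 -- the magnitudes differ a lot
```

Since (4, 8) = 4 × (1, 2), the angle between them is zero, so cosine similarity is 1 even though the Euclidean distance is large.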
I’ve always been curious about qualitative analysis, although I might or might not include it in my future studies. After years of quantitative work, it seemed a good idea to learn something from the other side, so I attended the workshop ‘Introduction to qualitative analysis’ at the Wellcome Trust Research Centre. The first lecture was loaded with theory, most of which I’ve forgotten, but it’s not difficult to pick up again by reading a few papers. The main point of the theory is to identify which type of qualitative study you want to conduct: grounded theory, which focuses on establishing facts and generating theory, or phenomenology, which focuses on exploring the feelings and unique experiences of the participants.
Then came a very interesting part this afternoon, in which the students role-played interviewer and interviewee. We were divided into groups, each with two interviewers (one grounded theory, one phenomenology) and one interviewee. We were all given the same research question: what factors might affect a student’s decision to drop out of, or continue, a PhD? I chose the grounded theory approach because I think exploring unique experience operates at a more advanced level.
First of all, I came up with a bunch of questions directed at the possible factors that could influence a student to continue the degree. As a psychologist, I tend to put everything in a structure from the very beginning. “Which theory are you using? Do you know the structure of this theory?” I still remember this question from my supervisor years ago, when I started my first research project.
Motivation (external or internal), social support, and financial support should be the important determinants. Other determinants might be quite diverse, but I guess this is exactly why we need an interview instead of scales.
Here’s the list of questions from me:
- Debrief the purpose of my study.
- Demographic questions: which degree she’s doing, which year, etc.
- Did you have research experience before you started your PhD? The participant said yes, so I went on to explore what it was and whether it related to her current research field.
- Do you like the research area you are working on? Why or why not?
- Did you start a family before your PhD? If yes, do you find it difficult to cater to the needs of your family and the degree at the same time? Why?
- Do you have close friends who are doing a PhD or similar research? Do you often talk to them? Why or why not? Do you think your friends and family are supportive? Why?
- Do you think your supervisors are supportive? Why? The participant said she has three supervisors, so I continued: do you have group meetings regularly? Do your supervisors have different opinions about your research topic?
- What do you think about your working environment? Why do you like your office?
- Do you think your colleagues are supportive? Do you work on a project together or do you have plans to work with them in the future?
- How’s your funding situation at the moment?
- What factors do you think would affect a decision to not continue your PhD, and why?
These questions mainly cover the hypotheses in my research question and also encourage the participant to bring up other factors that my questions didn’t include.
Here are the questions from the phenomenology side:
Why did you want to start a PhD?
What are your expectations?
Which expectations are met and which are not met?
What went well?
How could your experience be improved?
What didn’t go well?
In general, we retrieved similar content, and the phenomenology approach managed to explore specific questions at a deeper level. Since the same participant did both interviews, her replies to the phenomenology questions might have been primed a bit by my questions. There were many moments when the participant found it difficult to answer the phenomenology questions, because they demand a lot of reflection.
The lecturer seemed to appreciate my strategy, but the other students seemed to think I was asking superficial questions and trying to steer the participant’s thinking by using them as prompts. From a psychologist’s perspective, it’s good to listen to a participant’s narrative description to identify a problem; however, very often a participant may not be aware that something is an issue affecting how they feel about a task. That’s why we put everything in a structure, see to what degree the theory applies to individual cases, and then analyse why it does or doesn’t apply, or what’s missing.
In conclusion, I think this was a very interesting exercise, and I had a wonderful experience with both interview approaches. I hope I’ll have the chance to do more of this in the future.