This notebook explores the Finance Proposition Bank (FinProp) dataset. FinProp consists of proposition bank-style annotations of finance domain sentences extracted from past IBM annual financial reports. Each of the ~1,000 sentences is annotated with a layer of “universal” Semantic Role Labels covering part-of-speech tags, argument labels, and predicate labels.
Semantic Role Labeling (SRL) is a natural language processing task that structurally represents the meaning of a sentence. SRL assigns labels to the words in a sentence to capture the semantic relationship between a predicate and its arguments. By generating this relational language data, SRL makes it easier to identify the who, what, whom, where, and when of any particular sentence. SRL is particularly useful in tasks such as question answering, machine translation, document summarization, and information extraction.
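As a minimal, hypothetical illustration (the sentence and labels below are invented for this notebook and are not drawn from FinProp), PropBank-style SRL output for a single sentence might look like this:
# Hypothetical PropBank-style SRL annotation (illustrative only, not from the dataset)
sentence = "IBM reported higher revenue in 2019."
srl_annotation = {
    'predicate': 'reported',      # the event or relation
    'ARG0': 'IBM',                # who  - the agent doing the reporting
    'ARG1': 'higher revenue',     # what - the thing being reported
    'ARGM-TMP': 'in 2019',        # when - a temporal modifier
}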
The FinProp dataset provides roughly 1,000 finance domain sentences processed via SRL. It makes excellent training data for a deep neural network that performs SRL on unlabeled finance domain language. The dataset was open sourced by IBM Research and is freely available for download on the IBM Developer Data Asset Exchange: Finance Proposition Bank Dataset. This notebook can be found on Watson Studio: Finance Proposition Bank Notebook.
from IPython.display import clear_output
# Download & load required python packages
!pip install conllu
!pip install wordcloud
from IPython.display import Image
import requests
import hashlib
from pathlib import Path
from collections import defaultdict, Counter
import tarfile
from os import path
import matplotlib.pyplot as plt
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem.wordnet import WordNetLemmatizer
import nltk
nltk.download('all')
from wordcloud import WordCloud
clear_output()
Let's download the dataset from the Data Asset Exchange Cloud Object Storage bucket and extract the tarball.
# Download the dataset
fname = 'finance_proposition_bank.tar.gz'
url = 'https://dax-cdn.cdn.appdomain.cloud/dax-finance-proposition-bank/1.0.2/' + fname
r = requests.get(url)
Path(fname).write_bytes(r.content)
# Extract the dataset
with tarfile.open(fname) as tar:
    tar.extractall()
# Verify the file was extracted properly by comparing sha512 checksums
sha512sum = '368413a337a786f7471b54ea16b4ffa9031de0e15bd6de393d82ce20ec704a601fd1bf2dac0cbb34a2743483299e59f4af94182843b35e261bf00cd725aa476c'
sha512sum_computed = hashlib.sha512(Path('finance_proposition_bank.conllx').read_bytes()).hexdigest()
sha512sum == sha512sum_computed
Let's read the data into a Python object that we can use to learn more about SRL. Note that the conllu package allows custom parsing of the data to accommodate variations of the CoNLL format. More information is available on the project's PyPI page: https://pypi.org/project/conllu/
# We will use the conllu package to import the CoNLL formatted data file
from conllu import parse
# Read CoNLL text data into the data variable
data = Path('finance_proposition_bank.conllx').read_text()
# Use conllu's parse method to tokenize the raw data. Because this is CoNLL-X format data, we customize the parsing as explained in the next cell.
sentences = parse(data, fields=['id', 'form', 'lemma', 'cpostag', 'postag', 'feats', 'head', 'deprel', 'phead', 'pdeprel'])
More information about the CoNLL-X format can be found in this article: https://www.aclweb.org/anthology/W06-2920.pdf. Taken directly from the article, the fields represent:
ID: Token counter, starting at 1 for each new sentence.
FORM: Word form or punctuation symbol. For the Arabic data only, FORM is a concatenation of the word in Arabic script and its transliteration in Latin script, separated by an underscore. This representation is meant to suit both those that do and those that do not read Arabic.
LEMMA: Lemma or stem (depending on the particular treebank) of word form, or an underscore if not available. Like for the FORM, the values for Arabic are concatenations of two scripts.
CPOSTAG: Coarse-grained part-of-speech tag, where the tagset depends on the treebank.
POSTAG: Fine-grained part-of-speech tag, where the tagset depends on the treebank. It is identical to the CPOSTAG value if no POSTAG is available from the original treebank.
FEATS: Unordered set of syntactic and/or morphological features (depending on the particular treebank), or an underscore if not available. Set members are separated by a vertical bar (|).
HEAD: Head of the current token, which is either a value of ID, or zero (’0’) if the token links to the virtual root node of the sentence. Note that depending on the original treebank annotation, there may be multiple tokens with a HEAD value of zero.
DEPREL: Dependency relation to the HEAD. The set of dependency relations depends on the particular treebank. The dependency relation of a token with HEAD=0 may be meaningful or simply ’ROOT’ (also depending on the treebank).
PHEAD: Projective head of current token, which is either a value of ID or zero (’0’), or an underscore if not available. The dependency structure resulting from the PHEAD column is guaranteed to be projective (but is not available for all data sets), whereas the structure resulting from the HEAD column will be non-projective for some sentences of some languages (but is always available).
PDEPREL: Dependency relation to the PHEAD, or an underscore if not available.
Citation: Buchholz, Sabine & Marsi, Erwin (2006). CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X). doi:10.3115/1596276.1596305.
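To make these fields concrete, here is a small sketch (using the field names passed to parse above) showing how each column of a parsed token can be read back as a dictionary key:
# Each parsed token behaves like a dictionary keyed by the field names passed to parse
first_token = sentences[0][0]
for field in ['id', 'form', 'lemma', 'cpostag', 'postag', 'feats', 'head', 'deprel', 'phead', 'pdeprel']:
    print(field, '->', first_token[field])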
Let's visualize the parsed data to discover how it is structured and what we can do with it.
# sentences is a list of TokenList objects
type(sentences)
# Number of sentences we've imported
len(sentences)
# Example TokenList sentence
sentences[10]
# Fields of an example TokenList sentence (each token is an ordered dictionary)
for fld in sentences[10]:
    print("\n" + str(fld))
# Using https://universaldependencies.org/conllu_viewer.html we can visualize CoNLL formatted annotations in a tree-like format
Image(filename='FinPropBank - CoNLL Sentence Visualized.png')
# Let's count all the part-of-speech tags (POSTAG field) in our dataset
postag_count = defaultdict(int)
for snt in sentences:
    for wrd in snt:
        postag_count[wrd['postag']] += 1
print(postag_count)
# Now we can plot our most common part of speech tag occurrences
plt.figure(figsize=(15,10))
plt.bar(range(len(postag_count)), postag_count.values(), align='center')
plt.xticks(range(len(postag_count)), list(postag_count.keys()), rotation='vertical')
plt.show()
Looks like NN (singular noun, e.g. "llama"), IN (preposition, e.g. "of", "in", "by"), and NNS (plural noun, e.g. "tigers") are the three most common part-of-speech tags in our dataset. For more info on part-of-speech tagging, check out this presentation by the Stanford NLP group.
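As a quick sanity check (a small sketch using the parsed sentences from above), we can sample a few of the words that carry each of these tags:
# Print a few example surface forms for the three most common POSTAG values
for tag in ['NN', 'IN', 'NNS']:
    examples = [wrd['form'] for snt in sentences for wrd in snt if wrd['postag'] == tag]
    print(tag, examples[:5])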
# Let's create a word cloud of all of the words in this dataset. First we'll join the surface form of every sentence into one list of strings
text = [" ".join(wrd['form'] for wrd in snt) for snt in sentences]
# Next we lowercase the text, remove punctuation, special characters, and stop words, and lemmatize the remaining words
my_new_text = re.sub('[^ a-zA-Z0-9]', '', str(text))
stop_words = set(stopwords.words('english'))
lemma = WordNetLemmatizer()
word_tokens = word_tokenize(my_new_text.lower())
filtered_sentence = (w for w in word_tokens if w not in stop_words)
normalized = " ".join(lemma.lemmatize(word) for word in filtered_sentence)
# Now we can create the word cloud
wordcloud = WordCloud(max_font_size=60).generate(normalized)
plt.figure(figsize=(16,12))
# Plot the word cloud with matplotlib
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
# Let's count the most frequent words
count = Counter(normalized.split())
count100 = {word: times for word, times in count.items() if times > 100}
# And finally we plot the most frequent words
plt.figure(figsize=(15,10))
plt.bar(range(len(count100)), count100.values(), align='center')
plt.xticks(range(len(count100)), list(count100.keys()), rotation='vertical')
plt.show()