The study backend is built on the computer-science field of Natural Language Processing, or NLP, a discipline that uses computers to help people understand large bodies of text. The particular NLP software used is the Natural Language Toolkit, or NLTK. If you're curious, you can read about that software here and find a hands-on introduction to the field of NLP here, but you don't need to know any of that to use this site. If you just want a peek under the hood of the study backend, including the direction it's headed down the road, keep reading this page.
NLP rests on annotations to the text being studied. These annotations capture certain qualities of the text, and by using search tools that recognize them you can gain more understanding. The first kind of annotation is part-of-speech (POS) tags. You can read the help on the POS Tags page to learn more about them, and you can use these tags by turning on certain options in the Concordance tool and in some other tools. Note that, as discussed in the help on the POS Tags page, this is a work in progress.
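As a rough illustration of how POS annotations enable smarter searches, here is a minimal sketch. The tagged word list, the tags, and the function name are all hypothetical, not the site's actual data or code:

```python
# A tiny sketch of POS-annotated text: each word paired with a
# Penn-Treebank-style tag. The fragment and tags are illustrative.
tagged = [
    ("In", "IN"), ("the", "DT"), ("beginning", "NN"),
    ("God", "NNP"), ("created", "VBD"), ("the", "DT"),
    ("heaven", "NN"), ("and", "CC"), ("the", "DT"), ("earth", "NN"),
]

def find_by_tag(tagged_words, tag_prefix):
    """Return every word whose POS tag starts with the given prefix."""
    return [word for word, tag in tagged_words if tag.startswith(tag_prefix)]

print(find_by_tag(tagged, "NN"))  # all nouns: ['beginning', 'God', 'heaven', 'earth']
print(find_by_tag(tagged, "VB"))  # all verbs: ['created']
```

This is the kind of query a concordance can answer once tags exist: "show me every noun" rather than only "show me this exact word."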
POS tagging is just the first stop on the long journey to annotate the Bible for NLP, and each stop along the way builds on the one before. The milestones after POS tags are:
Parse trees: identify standard types of phrases, like noun phrases, verb phrases, and prepositional phrases, and link them together based on their dependencies within the sentence. This is like sentence diagramming, which used to be taught in grammar school.
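To make the idea concrete, here is a minimal sketch of a parse tree as a plain Python data structure, with a helper that pulls out phrases by type. The bracketing and the helper names are illustrative assumptions, not the backend's actual representation:

```python
# A parse tree sketched as nested tuples: (label, children...).
# Leaves are plain word strings; the bracketing is illustrative.
tree = ("S",
        ("PP", "In", ("NP", "the", "beginning")),
        ("NP", "God"),
        ("VP", "created", ("NP", "the", "heaven")))

def leaves(node):
    """Flatten a tree node back into its list of words."""
    if isinstance(node, str):
        return [node]
    out = []
    for child in node[1:]:
        out.extend(leaves(child))
    return out

def phrases(node, label):
    """Collect the word span of every phrase with the given label."""
    if isinstance(node, str):
        return []
    found = []
    if node[0] == label:
        found.append(" ".join(leaves(node)))
    for child in node[1:]:
        found.extend(phrases(child, label))
    return found

print(phrases(tree, "NP"))  # ['the beginning', 'God', 'the heaven']
```

With a structure like this, a search tool can answer questions such as "find every prepositional phrase containing this word," which flat POS tags alone cannot.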
Propositions: in grammar this is sometimes called predicate-argument analysis, and it's a step beyond syntax into semantics. In other words, while POS tags and parse trees show the structure of the text, propositions begin to show the meaning. Context-sensitive definitions, called semantic roles, are assigned to the different parts of parse trees, starting with the verbs. A verb can have different meanings in different contexts, and propositions annotate parse trees to show this.
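Here is a minimal sketch of what one proposition might look like as data. The role names loosely follow the PropBank convention of numbered arguments; the verb sense, labels, and helper are illustrative assumptions, not the backend's actual format:

```python
# A sketch of a proposition: a verb, its context-sensitive sense,
# and semantic roles assigned to the phrases that depend on it.
# The annotation itself is illustrative.
proposition = {
    "verb": "created",
    "sense": "create.01",                    # context-sensitive verb sense
    "roles": {
        "ARG0": "God",                       # the creator
        "ARG1": "the heaven and the earth",  # the thing created
        "ARGM-TMP": "In the beginning",      # when it happened
    },
}

def describe(prop):
    """Render a proposition as a one-line readable summary."""
    parts = [f"{role}={phrase!r}" for role, phrase in prop["roles"].items()]
    return f"{prop['verb']} ({prop['sense']}): " + ", ".join(parts)

print(describe(proposition))
```

The key point is that the same surface verb ("created") could carry a different sense label in a different context, and the roles make the who-did-what-to-whom explicit.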
Discourses: a single sentence, although parsed into phrases and annotated to show propositions, is often still just one of a series of sentences with a larger meaning. Discourses capture this by recognizing terms that connect sentences, like "And," "But," "Therefore," and "Nevertheless," and also by recognizing that sentences can be connected implicitly, that is, without an explicit connective term. Then, since a given connective doesn't always have exactly the same meaning, it's annotated with a context-sensitive definition, called its sense, just as a verb is annotated with its semantic role in a proposition.
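A minimal sketch of discourse annotation might look like the following. The sense label loosely follows the style of PDTB-like discourse categories; the example verse pairing, field names, and helper are illustrative assumptions:

```python
# A sketch of discourse annotation: adjacent sentences, each recording
# whether it is linked to what came before by an explicit connective,
# and if so, that connective's context-sensitive sense. Illustrative data.
discourse = [
    {"text": "Abraham believed God.",
     "connective": None, "sense": None},
    {"text": "Therefore it was counted to him for righteousness.",
     "connective": "Therefore", "sense": "Contingency.Cause.Result"},
]

def explicit_links(units):
    """Return (connective, sense) for every explicitly connected sentence."""
    return [(u["connective"], u["sense"]) for u in units if u["connective"]]

print(explicit_links(discourse))  # [('Therefore', 'Contingency.Cause.Result')]
```

An implicitly connected sentence would simply carry a sense with no connective string, which is why the annotation stores the two separately.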
Annotations are set initially by a computer, but then they have to be corrected by people. For example, an automated POS tagging program, called the NLTK averaged perceptron tagger, was used to initially set the POS tag for each word, but many manual edits have since been made, and more are still needed. Also, automated annotation at any step relies on correct annotations at all the previous steps.
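The correct-after-automation workflow described above can be sketched as a layer of manual edits applied on top of automatic output. The toy tag list, the mistake, and the function are all hypothetical; in practice the automatic layer would come from a real tagger like NLTK's averaged perceptron tagger:

```python
# A sketch of human correction layered over automatic annotation.
# Suppose the automatic tagger mislabeled "Blessed" as a proper noun.
auto_tags = [("Blessed", "NNP"), ("are", "VBP"), ("the", "DT"), ("meek", "NN")]

# Manual corrections, keyed by token position: a human re-tags token 0
# as a past participle. Illustrative data.
manual_edits = {0: "VBN"}

def corrected(tags, edits):
    """Apply manual tag edits on top of the automatic tagging."""
    return [(word, edits.get(i, tag)) for i, (word, tag) in enumerate(tags)]

print(corrected(auto_tags, manual_edits))
```

Keeping the edits separate from the automatic layer also means the automatic pass can be re-run later without losing the human corrections.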
Keep in mind that these computerized annotations do not enable some kind of computerized Bible understanding that you can just generate and read. NLP helps you get understanding as you use this site's tools. You're the one with the understanding, not the computer. And even beyond Bible study, you need to spend time just reading the Bible and meditating in it, or your study time won't be very productive. And beyond that, you need to work to obey what you already understand from the Bible if you expect God to give you more understanding.
Finally, remember that annotations are not scripture. The Bible is not subject to change, but annotations are. So, it might be possible at some point for each user of BibleStudy.tools to have their own set of annotation tweaks that they maintain and study under.