Classifying documents and other content items makes them easier to find later. Full-text search alone can retrieve inaccurate results or miss relevant documents whose wording differs from the terms entered into a search box. A document or content management system may include features for tagging, keywords, categories, indexing, etc. Taxonomist Heather Hedden explains the differences among these elements to support more effective knowledge and content management.
Marcus Zillman’s guide highlights multifaceted browser alternatives to the mainstream search tools that researchers often use by default. There are many reliable yet underutilized applications that facilitate access to and discovery of subject-matter-specific documents and sources. The free applications included here also offer tools for collaboration, building and managing repositories, data visualization, metadata management, citations and bibliographies, document discovery, and data relationship analysis.
Pete Weiss is the author of Pete Recommends – Weekly highlights on cyber security. He is a strong advocate of RSS for keeping pace with rapidly changing updates in news, research and technology, to name but a few subjects. Using Sabrina Pacifici’s blog, beSpacific, as an example, Weiss offers more than a dozen regularly updated, subject-matter-specific feeds that you should consider adding to your research portfolio.
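For readers who want to go a step further than a feed reader, RSS feeds like those Weiss recommends can also be read programmatically. The following is a minimal sketch, using only Python's standard library, of extracting item titles and links from an RSS 2.0 document; the sample XML and its URLs are illustrative stand-ins, and a real reader would instead fetch a live feed URL (for example, a blog's feed endpoint) with `urllib.request`.

```python
# Minimal sketch: reading an RSS 2.0 feed with Python's standard library.
# SAMPLE_RSS is an illustrative stand-in for a real feed document.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Research Feed</title>
    <item>
      <title>New privacy guide published</title>
      <link>https://example.com/privacy-guide</link>
      <pubDate>Mon, 03 Dec 2018 09:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Cybersecurity highlights roundup</title>
      <link>https://example.com/cyber-roundup</link>
      <pubDate>Sun, 02 Dec 2018 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def latest_items(rss_xml: str):
    """Return (title, link) pairs for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in latest_items(SAMPLE_RSS):
    print(f"{title} -> {link}")
```

A script like this can be scheduled to run periodically and compare new items against previously seen links, which is essentially what a feed reader does under the hood.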
Former CPA, writer and teacher Ken Boyd provides readers with an explanation of tax fraud that is clearly presented, instructive and relevant to the ongoing Mueller investigation. Boyd uses the extensive New York Times investigative report of November 2018 that documented a history of tax fraud allegedly committed by Donald Trump, his father and siblings, as the foundation for his lesson on various types of tax fraud. The allegations documented by the Times are under review by the New York State Department of Taxation and Finance.
How big is the Deep Web? One estimate puts it at 7,500 terabytes, although its exact size is not known and figures vary widely. The magnitude, complexity and siloed nature of the Deep Web are a challenge for researchers. You cannot turn to one specific guide or one search engine to effectively access the vast range of information, data, files and communications that comprise it. The ubiquitous search engines index, manage and deliver results from the Surface Web. These results include links, data, information, reports, news, subject matter content and a large volume of advertising optimized to increase traffic to specific sites and to support marketing and revenue objectives. The Deep Web, by contrast, is often misconstrued as a repository of dark and disreputable information [Note – it is not the Dark Web]. It has grown far beyond that characterization to include significant content on a wide range of subjects, spanning a broad swath of files and formats, databases and paywalled content, as well as communications and web traffic that are not otherwise accessible through the Surface Web. This comprehensive, multifaceted guide by Marcus Zillman provides you with an abundance of resources to learn about and search the Deep Web, apply appropriate privacy protections, and make the most of your time and effort in conducting effective, actionable research within it.
Christopher Kenneally interviewed Marcy Phelps on his Copyright Clearance Center’s podcast series, Beyond the Book. A licensed private eye who earned her master’s degree in library and information science from the University of Denver, Marcy Phelps works for asset management firms, commodity pool operators, M&A professionals, and others. Her detective work combing through databases and other online data dumps helps build a definitive dossier documenting any litigation, bankruptcies, and regulatory actions that could raise unpleasant questions for investors and even uncover unsavory characters.
Alan Rothman suggests a new phrase for a growing subject matter area, which he calls Fact-Check Tech. His article introduces us to a prototype TV news voice scanner and fact-checker called Voyc. The significance of this new technology will quickly become apparent to news consumers in the U.S. and around the world, as we are increasingly confronted with endless charges of “fake news” and counter assertions of what is “real news.” The Voyc technology currently under development can assess the audio of live news broadcasts to evaluate the veracity of statements within seconds of their being spoken.
Data, BI & Analytics expert Siraj Patel discusses the global financial services and products industry in the context of the urgent need for existing business models to adapt and innovate in a time of disintermediation, product unbundling, and marketplaces that offer customers rapidly changing banking options.
This new comprehensive guide by Marcus Zillman to reliable, wide-ranging resources on the New Economy gives researchers focused on the law, finance and business sectors many options for sources of data, analytical information, statistics and knowledge published by the federal government, corporations, NGOs, nonprofits, subject matter experts and publishers. Zillman also includes Open Data Sets and databases that are available to the public.
Marcus Zillman’s guide provides multidisciplinary researchers with a wide range of internet sources to assist in identifying, reviewing and engaging the talents of subject matter experts in the U.S. and abroad. In addition, the guide links to numerous sites and forums that provide answers to a range of questions, from the simple to the complex and from topical matters to technical issues.